Abstract
Image search engines mostly use keywords as queries and rely on the surrounding text to search for images. They suffer from the ambiguity of query keywords, because it is hard to describe the visual content of target images accurately using keywords alone. For example, if "apple" is the query keyword, the returned images may fall into categories such as "red apple" and "Apple laptop". Another challenge is that, without online training, low-level visual features may not correlate well with high-level semantic meanings; low-level features are sometimes inconsistent with visual perception. In the proposed approach, the visual and textual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures, which are obtained from the semantic space specified by the query keyword. The semantic space of a query keyword can be described by just 20-30 concepts (also referred to as "reference classes").
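The online re-ranking step described above can be illustrated with a minimal sketch. Assuming each semantic signature is a probability vector over a query keyword's reference classes (the class names, signature values, and image identifiers below are hypothetical, chosen only for illustration), images can be re-ranked by their signature similarity to a query image:

```python
import math

def cosine(a, b):
    """Cosine similarity between two semantic signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query_sig, image_sigs):
    """Sort images by similarity of their semantic signatures to the query image's."""
    scored = [(name, cosine(query_sig, sig)) for name, sig in image_sigs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical signatures over three reference classes for the keyword "apple":
# ["red apple", "apple pie", "apple laptop"].
query = [0.8, 0.1, 0.1]       # query image resembles a red apple
images = {
    "img1": [0.7, 0.2, 0.1],  # mostly red apple
    "img2": [0.1, 0.1, 0.8],  # mostly apple laptop
    "img3": [0.3, 0.6, 0.1],  # mostly apple pie
}
print(rerank(query, images))  # img1 ranks first, img2 last
```

Because signatures live in a low-dimensional space of only 20-30 reference classes, such comparisons are cheap at the online stage.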