Although the Internet appears to be an ocean of information offering almost anything a user could want, extracting a specific set of high-quality data from it proves impossible in 99% of cases. In most cases, the user must settle for the surface web, which constitutes only about 1% of the total data available. Users often obtain information only from static sites, while most of the data on the web is stored in dynamically generated sites, which differ from static sites both qualitatively and quantitatively. Hidden web crawlers help users overcome these difficulties. Deep web sources store their content in searchable databases that produce results only dynamically, in response to direct requests; a hidden web crawler therefore issues dozens of such direct queries simultaneously using multi-threading, and is thereby capable of identifying, retrieving, classifying, and organizing "deep" content. This paper presents a comparison between the surface web and the hidden web, and discusses the basic working principles, components, importance, and future scope of this indispensable tool, the hidden web crawler.
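
For concreteness, the sketch below illustrates in Python the kind of multi-threaded direct querying described above. It is not the implementation discussed in this paper: the endpoint URL, the "q" parameter name, and the query terms are hypothetical placeholders, standing in for whatever search form a real crawler would target.

```python
# Minimal sketch of multi-threaded direct querying of a searchable database:
# several queries are submitted concurrently, and each dynamically generated
# response page is collected for later classification and organization.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode
from urllib.request import urlopen

SEARCH_FORM = "http://example.com/search"  # hypothetical searchable database


def query(term: str) -> tuple[str, bytes]:
    """Issue one direct query against the search form and return the page."""
    url = f"{SEARCH_FORM}?{urlencode({'q': term})}"
    with urlopen(url, timeout=10) as resp:
        return term, resp.read()


terms = ["genomics", "court records", "patents"]  # example query terms
with ThreadPoolExecutor(max_workers=8) as pool:   # dozens of threads in practice
    for term, page in pool.map(query, terms):
        print(term, len(page), "bytes of dynamically generated content")
```

A thread pool suits this workload because each query spends most of its time waiting on the network, so many requests can proceed in parallel without heavy CPU cost.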