A Review of Extraction of Image, Text-line and Keywords from Document Image
Sneha L. Bagadkar, Dr. L. G. Malik
In document processing, typed and handwritten text on paper-based and electronic documents is converted into electronic information. To electronically process the contents of printed documents, information must be extracted from digitally scanned images. Printed documents remain in wide use: printed forms are delivered to end users for completion, storage, verification and so on. In such situations these printed documents must be returned to digital form in order to participate in digitized workflows. In printed documents, the contents of different regions and fields are highly heterogeneous, with different layouts, printing quality and typing standards. Extracting text lines, keywords and images from such complex printed documents can therefore be a difficult problem.
Design and Development of a Remote Monitoring and Maintenance of Solar Plant Supervisory System
R. Nagalakshmi, B. Kishore Babu, D. Prashanth
In this paper we implement a prototype solar PV monitoring and optimization system that includes a data acquisition system, a supervisory monitoring and control station at plant level, and a Decision Support System (DSS) at the Central Control Station. The prototype consists of two plant level monitoring (PLM) systems and a Central Control Station (CCS). One plant level system performs basic data acquisition from weather sensors related to solar energy and from string monitoring units. The other plant level system adds these features to a solar tracker system. The CCS provides continuous on-line monitoring, control, storage and reporting at plant level; it collects time-stamped data from the PLMs via a wireless module for real-time processing, storage, alarming, reporting and display. The result is a complete monitoring and analytics system.
The Performance Analysis of OFDM-IDMA in Wireless Communication by Using an Iterative Sub-optimal Receiver Structure
Y.Sukanya, K.Lakshmana Rao
This paper provides a comprehensive study of OFDM-IDMA. Comparisons with alternative technologies such as OFDM-CDMA are provided. The cyclic prefix reduces ISI in the OFDM block, while IDMA reduces MAI in the AWGN channel. A signal-to-noise ratio (SNR) evolution technique is developed to predict the bit-error-rate (BER) performance of OFDM-IDMA using BPSK modulation. Simulation results are provided in the MATLAB environment.
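As a rough illustration of the kind of SNR-versus-BER evaluation described above, the following Python sketch estimates the BER of plain BPSK over an AWGN channel by Monte Carlo simulation; it is only a baseline stand-in and does not implement the OFDM-IDMA iterative sub-optimal receiver itself (bit counts and SNR range are arbitrary).

```python
import numpy as np

# Monte Carlo BER estimate for BPSK over an AWGN channel.
# Baseline sketch of an SNR-vs-BER evaluation, not the OFDM-IDMA receiver.
rng = np.random.default_rng(0)
n_bits = 100_000

for snr_db in range(0, 11, 2):
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                      # BPSK mapping: 0 -> -1, 1 -> +1
    snr_linear = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / (2 * snr_linear))   # unit-energy bits, Eb/N0 reading
    received = symbols + noise_std * rng.standard_normal(n_bits)
    detected = (received > 0).astype(int)       # hard decision
    ber = np.mean(detected != bits)
    print(f"SNR = {snr_db:2d} dB  ->  BER ~ {ber:.5f}")
```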
Link Analysis in Relational Databases using Data Mining Techniques
Smita Shinde, Amol Rajmane
A data mining approach typically assumes a random sample of items from a single relational database, and many data mining techniques can be used to extract information from such data. The work proposed here introduces a link analysis procedure that discovers relationships within a relational database viewed as a graph. The approach applies to single as well as multiple relational databases and analyses two or more databases jointly. By taking a random walk on the database, different states can be defined. In the first part the relational database is represented as a graph; in the second part a Markov chain is built that contains only the elements of interest while preserving the characteristics of the original chain, and these elements are analysed by projection onto a diffusion map. Several datasets are analysed with the proposed methodology, showing the benefits of this technique for finding relationships in relational databases or graphs.
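To make the random-walk idea concrete, the sketch below builds a transition matrix from a small adjacency graph of the kind obtained from relational links and iterates the walk; the graph, node names and use of a stationary distribution are illustrative assumptions, not the paper's diffusion-map procedure.

```python
import numpy as np

# Toy adjacency for a small graph derived from relational links
# (e.g., records linked through shared attribute values).
nodes = ["a", "b", "c", "d"]
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# Row-normalise to obtain the random-walk transition matrix P.
P = A / A.sum(axis=1, keepdims=True)

# Stationary distribution via power iteration: long-run visit frequencies
# of the walk, one simple notion of how strongly nodes are linked.
pi = np.full(len(nodes), 1 / len(nodes))
for _ in range(1000):
    pi = pi @ P
print(dict(zip(nodes, np.round(pi, 3))))
```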
Using Educational Data Mining (EDM) to Predict and Classify Students
Samira Talebi, Ali Asghar Sayficar
The aim of this paper is to predict students’ academic performance, which is useful for identifying weak students at an early stage. In this study, we used the WEKA open source data mining tool to analyse attributes for predicting students’ academic performance. The data set comprised 180 student records with 21 attributes of students registered between 2010 and 2013, drawn from Ferdowsi University of Mashhad. We applied the data set to four classifiers (Naive Bayes, LBR, NBTree, Best-First Decision Tree) and obtained the accuracy of predicting students’ performance as either successful or unsuccessful. A student's academic performance can be predicted using knowledge discovered from the existing database. Cross-validation with 10 folds was used to evaluate prediction accuracy. The result showed that the Naive Bayes classifier scored the highest prediction F-measure, 83.9%.
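For readers who prefer a scriptable equivalent of the WEKA workflow, the following hedged Python sketch runs a Naive Bayes classifier under 10-fold cross-validation on synthetic stand-in data of the same shape (180 records, 21 attributes); the real attributes and data set are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 180-record student data set described above.
rng = np.random.default_rng(42)
X = rng.normal(size=(180, 21))                 # 21 numeric attributes
y = rng.integers(0, 2, size=180)               # successful / unsuccessful

# 10-fold cross-validation of a Naive Bayes classifier, mirroring the
# evaluation protocol described in the abstract (WEKA was used there).
clf = GaussianNB()
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print("mean F-measure:", scores.mean().round(3))
```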
T-Drive: Enhancing Driving Directions with Taxi Drivers’ Intelligence
Miss M.A. ShehnazBegum, Miss N.Nalini,Mrs.A.Angayarkanni
GPS-equipped taxis use sensor devices to probe the road network, and taxi drivers are usually experienced in finding the fastest route to a destination based on their knowledge; doing this manually, however, takes much time and produces rough routes. In this paper, we mine smart driving directions from a historical dataset collected from a large number of taxis. In our approach, we propose a time-dependent landmark graph, in which each node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers. We use a Variance-Entropy-Based Clustering approach to estimate the distribution of travel time between source and destination in different time slots, and we also compute the distance travelled between locations. Based on this, we design a two-stage routing algorithm to compute the practically fastest route. We build the system on a real-world trajectory dataset generated by over 33,000 taxis over a period of 3 months and evaluate it by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than those of the competing methods, and 20% of the routes are identical. On average, 50% of our routes are at least 20% faster than the competing approaches.
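The routing step can be illustrated with a toy time-dependent shortest-path search: edge travel times change with the departure-time slot, and a Dijkstra-style expansion tracks arrival times. The graph, the two-slot rule and the function names below are purely illustrative and do not reproduce the paper's two-stage algorithm or its Variance-Entropy-Based Clustering.

```python
import heapq

# Edge travel times (minutes) per time slot: {u: {v: [off_peak, peak]}}.
# The topology and values are made up for illustration.
graph = {
    "A": {"B": [5, 9], "C": [7, 7]},
    "B": {"D": [4, 12]},
    "C": {"D": [6, 6]},
    "D": {},
}

def slot(t):
    return 0 if t < 30 else 1          # toy rule: after minute 30 it is "peak"

def fastest(graph, src, dst, depart=0):
    # Dijkstra on arrival time instead of a static edge weight.
    best = {src: depart}
    heap = [(depart, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, times in graph[u].items():
            arrival = t + times[slot(t)]
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return float("inf")

print(fastest(graph, "A", "D", depart=0))    # leaves off-peak -> arrives at 9
print(fastest(graph, "A", "D", depart=40))   # leaves during peak -> arrives at 53
```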
Rural Marketing in India: Challenges And Opportunities
Arshi Talwar, Shweta Popli, Sneha Gupta
In recent years the rural market has acquired significance and attracted the attention of marketers, as 68.84% of India's population resides in 638,000 villages and the overall growth of the economy has resulted in a substantial increase in the purchasing power of rural communities. Due to the green revolution, rural areas are consuming a large quantity of industrial and manufactured products. The rural market thus offers opportunities in the form of a large untapped market, rising disposable income, rising literacy levels and large scope for penetration. To take advantage of these opportunities, a special marketing strategy, ‘rural marketing’, has emerged. This paper tries to understand the rural market, the importance of rural marketing and the status of the rural market. The main aim of the study is to examine the potential of Indian rural markets and identify the various problems faced by rural marketers.
Mobile Cloud Computing (MCC) is a combination of cloud computing and mobile networks: a model in which mobile applications are built, powered and hosted using cloud computing technology. The capabilities of mobile devices have been improving more quickly than those of computers, and many researchers focus on the areas of mobile computing and cloud computing. Mobile computing means accessing shared data or infrastructure through portable devices such as PDAs, smart phones and tablets, independently of physical location, while cloud computing means virtual computing, distributed computing and resource sharing. Mobile devices use the cloud for both application development and hosting; many mobile applications are cloud-based, for example browsers and social networking apps such as Facebook, which are accessed through the cloud (internet). This gives the user an interface to data and services on the cloud platform. Mobile computing must also operate within tighter energy limits than regular cloud computing.
For wireless communication, several information channels with several different RF transmission frequencies are required because of the need for many selective RF receivers. This requirement leads to a complex network, which indirectly increases the cost of the system. This paper provides a solution to this problem by using a Time Division Multiplexing (TDM) system with a distributed architecture for data acquisition and control, in which a single twisted-wire pair is used to transmit multichannel data in different time slots. A low frequency carrier communication (LFCC) unit is used as a highly modular, data-point-oriented building block, producing a cost-effective solution.
A Novel Framework for Satellite Image Resolution Enhancement Using Wavelet-Domain Approach Based On Dt-Cwt and NLM
K. Mehrajul Haq (Pg Scholar), P. Jaya Rami Reddy M.Tech
One of the biggest drawbacks of resolution enhancement (RE) models is the loss of high-frequency content, which results in blurring, and discrete wavelet transform (DWT) based RE schemes generate artifacts because of the shift-variant property of the DWT. For RE of satellite images, a new wavelet-domain approach based on the dual-tree complex wavelet transform (DT-CWT) and nonlocal means (NLM) is proposed here. A satellite input image is decomposed by the DT-CWT, which is nearly shift invariant, to obtain high-frequency sub-bands. The high-frequency sub-bands and the low-resolution (LR) input image are interpolated using the Lanczos interpolator. The high-frequency sub-bands are then passed through an NLM filter to suppress the artifacts generated by the DT-CWT. A resolution-enhanced image is obtained by combining the filtered high-frequency sub-bands and the LR input image using the inverse DT-CWT. Objective and subjective analyses show the superiority of the proposed technique over conventional and state-of-the-art RE techniques.
Toll collection and speed monitoring system using RFID
Yashavi Bhole, Komal Binawade, Rahul Bhosle, Prof. Dhanashri Joshi
Radio Frequency Identification (RFID) is an auto-identification technology that uses radio frequencies (between 30 kHz and 2.5 GHz) to identify objects remotely. An automated toll collection system using passive RFID tags is a convincing alternative to the manual toll collection method employed at tollgates. Time and efficiency are priorities today, and RFID technology is used to overcome the major issues of vehicle congestion and time consumption. An RFID reader fixed on the tollgate frame (or a hand-held reader at a manual lane, in case an RFID-tagged vehicle enters the manual toll-paying lane) reads the tag attached to the windshield of the vehicle. The object-detection sensor in the reader detects the approach of the incoming vehicle’s tag, and the toll is deducted from a prepaid card assigned to the RFID tag belonging to the owner’s account. This makes tollgate transactions more convenient for the public. The result is a system that detects, bills and accounts for vehicles as they pass through a tollgate using RFID as the identification technology. Such a system is a worthwhile investment in the transport industry, reducing the common hassles in accounting for the movement of goods from point to point. An RFID tag is programmed with information in the form of an Electronic Product Code (EPC) that can be read over a considerable distance, so its contents identify the vehicle and allow a transaction to be undertaken against that specific tag identity, taking advantage of radio frequencies’ ability to travel long ranges with good data capacity, high speed and high accuracy.
This paper is a short discussion of one of the most important encryption techniques, based on the symmetric key algorithms of network security. As described here, the scheme uses two keys, a public key and a private key, to limit access to data and thus maintain the required secrecy; in the symmetric key part, the private key is the secret key, known only to the sender and receiver, and is responsible for the secrecy of the data. Before moving to a detailed understanding of the algorithm, it is worth noting where it is implemented: in network devices such as modems, routers, bridges, gateways, repeater hubs, switches, and any other device involved in transferring data that must be kept secret during its transmission to the receiver.
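A minimal sketch of shared-secret (symmetric) encryption, assuming the Python `cryptography` package's Fernet recipe rather than whatever cipher the paper itself uses:

```python
from cryptography.fernet import Fernet

# A single shared secret key: in a symmetric scheme the same key both
# encrypts and decrypts, so sender and receiver must keep it secret.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"configuration payload sent between two network devices"
token = cipher.encrypt(plaintext)      # ciphertext safe to transmit
recovered = cipher.decrypt(token)      # only holders of `key` can do this

assert recovered == plaintext
print(token[:40], b"...")
```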
Web Intelligence-An Emerging vertical of Artificial Intelligence
Rahul Pareek
The information footprints of a rapidly increasing influx of internet users present an immense source of information that ultimately contributes to the construction of innovative web technology suitable for future generations. Web intelligence has accordingly been presented as the use of advanced techniques from artificial intelligence and information technology for exploring, analysing and extracting knowledge from web data. Web intelligence is the field of research and development, slowly developing throughout the world, that explores the fundamental roles as well as practical applications of artificial intelligence in order to create the next generation of internet-based products, services and frameworks.
An integrated system for an effective e-teaching-learning process
Chidozie Ifeanyi, AdesinaOmolayo, Ayeni Joshua
The problems surrounding the educational system have not yet been, and perhaps cannot be, completely solved; they can, however, be reduced so as to enhance a successful teaching-learning process. There are numerous problems militating against successful teaching and learning; here we limit ourselves to those observed at Ajayi Crowther University. One of the problems identified is students’ inability to ask questions in class, attributed to poor diction or a lack of boldness or courage. In this paper, we propose a system that helps to eradicate this problem by integrating a chat module into the school’s website, allowing students to engage in remote communication with their lecturers or with other students. The system will include an integrated website that provides students with self-study objects (such as e-books, tutorial videos and audio) and object modules such as the chat facility.
Image fusion is an important visualization technique for integrating coherent spatial and temporal information into a compact form. Laplacian fusion combines regions of images from different sources into a single fused image based on a salience selection rule for each region. In this paper, we propose an algorithmic approach using Laplacian and Gaussian pyramids to better localize the selection process. The input images are decomposed using the Laplacian pyramid transformation, and the final fused image is obtained by inverse reconstruction of the Laplacian pyramid.
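A minimal sketch of this pyramid-based fusion, assuming OpenCV's pyrDown/pyrUp and a simple maximum-magnitude selection rule in place of the paper's salience rule; the input images below are synthetic placeholders.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    # Gaussian pyramid first; each Laplacian level = level - upsampled(next level).
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1]) for i in range(levels)]
    lap.append(gauss[-1])                      # keep the coarsest level as-is
    return lap

def fuse(img_a, img_b, levels=4):
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    # Per-coefficient selection: keep the larger magnitude, a simple stand-in
    # for the salience selection rule mentioned in the abstract.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    out = fused[-1]
    for level in reversed(fused[:-1]):         # inverse pyramid reconstruction
        out = cv2.pyrUp(out) + level
    return np.clip(out, 0, 255).astype(np.uint8)

# Two synthetic source images standing in for real sensor images.
a = np.zeros((256, 256), np.float32); a[:, :128] = 220.0   # detail on the left
b = np.zeros((256, 256), np.float32); b[:128, :] = 180.0   # detail on the top
cv2.imwrite("fused.png", fuse(a, b))
```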
DWT Based Hybrid Image Compression For Smooth Internet Transfers
Reetinder Kaur
Image compression is a necessary and popular technique for saving memory on a local disc and for speeding up internet transfers. The discrete wavelet transform (DWT) has proved to be among the best image compression approaches: it decomposes the image matrix into various sub-matrices to create a compressed image. A new compression technique is developed here by combining the most effective and fast wavelets of the DWT for image compression. The quality of the new technique is evaluated using the peak signal-to-noise ratio (PSNR), mean squared error (MSE), compression ratio (CR) and elapsed time (ET), and the new technique is compared with existing image compression techniques on the basis of these parameters.
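A hedged single-level sketch of DWT-based compression with PSNR/MSE measurement, using PyWavelets and a Haar wavelet on a synthetic image; the paper's actual wavelet combination and coding stage are not reproduced here.

```python
import numpy as np
import pywt

def psnr_mse(original, reconstructed):
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse), mse

# Synthetic 8-bit-range test image (a ramp plus mild noise) standing in for a real file.
rng = np.random.default_rng(1)
img = np.add.outer(np.arange(128.0), np.arange(128.0)) + rng.normal(0, 5, (128, 128))

# One-level 2-D Haar DWT; small detail coefficients are zeroed, which is what
# would let a later entropy-coding stage shrink the stored data.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
threshold = 20.0
cH, cV, cD = [np.where(np.abs(c) < threshold, 0.0, c) for c in (cH, cV, cD)]

recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
kept = sum(np.count_nonzero(c) for c in (cH, cV, cD))
p, mse = psnr_mse(img, recon)
print(f"PSNR = {p:.2f} dB, MSE = {mse:.2f}, nonzero detail coefficients = {kept}")
```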
In today's world, where we deal with vast amounts of data, we need to extract the best from it. Data mining is an emerging approach to discovering information and knowledge from existing data. The knowledge gained can be used in applications ranging from market analysis, fraud detection and customer retention to production control and science exploration. Data mining uses machine learning, statistical and visualization techniques to present the discovered knowledge in a form that users can easily understand. Data mining tools are used to predict the future trends and behaviours of business organisations and to help them make good decisions for their growth and development; they search databases for hidden patterns and make predictions that go beyond the experts' intuition. This paper describes various open source data mining tools with which efficient predictive analysis can be done.
Energy Structure Theory and Energy Hindered Equation
Yu Han
The essence of energy is action, and the essence of structure is interaction. There are structures in energy, and the interactions between energy and matter (or other energy) are very complex. A material's ability to absorb energy has certain limits. When a given energy E interacts with a material of mass M to make it move, E is converted into the material's kinetic energy; but because of the material's limited capacity to absorb energy, E is not completely converted into kinetic energy, and part of it is converted into mass. The part of E that cannot be absorbed as kinetic energy is called the hindered energy, with symbol Eh. Here c is the speed of light in vacuum and ΔM is the change in the mass of the material during the interaction with the energy.
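One way to write the relations described above, on the assumption that the hindered energy is exactly the part of E converted into rest mass, is

$$E = E_k + E_h, \qquad E_h = \Delta M \, c^2,$$

where \(E_k\) is the kinetic energy actually imparted to the mass \(M\), \(E_h\) the hindered energy, \(\Delta M\) the change in mass and \(c\) the speed of light in vacuum.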
An High Equipped Image Reconstruction Approach Based On Novel Bregman SR Algorithm Using Morphologic Regularization Technique
Billa Manasa (Pg Scholor), S. Vaishali M.Tech
“Feature extraction” from digital images has been an area of concern for decades, and multiscale morphological operators are considered a successful approach for image processing and feature extraction. Although morphological operators succeed at feature extraction, they have some drawbacks. In this paper we model a nonlinear regularization method based on multiscale morphology for edge-preserving super-resolution (SR) image reconstruction. We formulate SR reconstruction from a low-resolution (LR) image as a deblurring and denoising problem and then solve the inverse problem using Bregman iterations. The proposed method efficiently reduces the inherent noise generated during low-resolution image formation as well as during SR image estimation. Simulation results show the effectiveness of the proposed reconstruction method for SR images.
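One standard way to write such a deblurring/denoising formulation and its Bregman iteration, using generic symbols that may differ from the paper's notation, is

$$\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\lVert y - DHx \rVert_2^2 + \lambda\, R(x),$$

where \(y\) is the observed LR image, \(H\) a blur operator, \(D\) a downsampling operator and \(R(\cdot)\) the regularizer (here morphological). A Bregman-type iteration then repeatedly solves this subproblem while adding the residual back:

$$x^{k+1} = \arg\min_{x}\; \tfrac{1}{2}\,\lVert y^{k} - DHx \rVert_2^2 + \lambda\, R(x), \qquad y^{k+1} = y^{k} + \bigl(y - DHx^{k+1}\bigr), \qquad y^{0} = y.$$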
Reliability Analysis of unreliable MX/G/1 Retrial Queue with Second Optional Service, Setup and Discouragement
Madhu Jain, Deepa Chauhan
Retrial queues have been widely used to model many problems arising in telephone switching systems, telecommunication networks, computer networks and computer systems. In this paper an M^X/G/1 retrial queue with two-phase service, discouragement and general setup time is studied, where the server is subject to breakdown during service. Primary customers join the system according to a Poisson process and receive service immediately if the server is available upon arrival. Otherwise, they enter a retrial orbit with some probability and are queued in the orbit, repeating their demand after some random interval of time. Customers are allowed to balk upon arrival. All customers who join the queue undergo the first essential service, whereas only some of them demand the second optional service. Using the generating function approach and the supplementary variable method, steady-state solutions for several queueing and reliability measures of the system are obtained. A sensitivity analysis explores the effects of the system parameters on the various performance measures.
Android is a software stack for mobile devices that includes an operating system, middleware and key applications. Android provides the tools and APIs necessary to begin developing applications on the platform using the Java programming language. It is a widely anticipated open source operating system for mobile devices that provides a base operating system, an application middleware layer, a Java software development kit (SDK) and a collection of system applications. Android has a unique security model that focuses on putting the user in control of the device. Android devices, however, do not all come from one place: the open nature of the platform allows proprietary extensions and changes.
Our operational semantics provide some necessary foundations to help both users and developers of Android applications deal with their security concerns. One way to provide security in the Android OS is a multi-user security system. As Android is open source, we can download the source and modify it accordingly; to add this feature we can develop a multi-user application and implement the security on the Android operating system.
To implement this facility we develop an Android application that supports multiple users. After developing it, the application can be added to the Android OS externally or built in, as per user requirements. Once it is installed on a smart phone, more than one user can be created, which provides additional security for the phone.
To provide still more security, the smart phone can also use location tracking through LBS, i.e. Location-Based Services. This technology makes it possible to track the location of unauthorized users: if an unauthorized user enters a wrong password more than five times, his photo is captured (if a front camera is available), his location is tracked, and the location information is sent to the registered administrator's e-mail address.
Digital Image Processing based Detection of Brain Abnormality for Alzheimer’s disease
Dr. PSJ Kumar, Mr. Anirban Saha
Digital medical imaging is highly expensive and complex due to the need for proprietary software and expert personnel. This paper introduces a user-friendly general analysis and visualization program, written in MATLAB, for detecting the brain disorder Alzheimer’s disease as early as possible. The application provides quantitative and clinical analysis of digital medical images, i.e. MRI scans of the brain. Very small structural differences in the brain may gradually result in a major brain disorder such as Alzheimer’s disease. The primary focus here is on detection and diagnosis of Alzheimer’s disease, implemented using the bicubic interpolation technique.
Rule Mining in Medical Domain Using Swarm based Techniques
Varinder Singh
In this paper, a swarm intelligence based technique for mining rules over a medical database is used. Rules are a suitable method for representing real-world medical knowledge because of their simplicity, uniformity, transparency and ease of inference. Swarm Intelligence (SI) has been applied to the rule mining process because its dynamic nature provides flexibility and robustness. Traditional methods of rule mining generate a large number of rules with too many terms, making such systems unusable on medical data. In this paper, SI is used as a novel method for discovering interesting rules in the medical domain, and the performance of three different swarm-based techniques is compared by observing the rule sets they output for classifying the data.
Section 1 introduces the concepts of swarm intelligence and rule mining and how they can be combined; issues that arise in mining medical data are also briefly listed. Section 2 describes conventional rule mining techniques and states the motivation for using swarm intelligence for rule mining and classification. Section 3 describes the various SI-based algorithms implemented in our study. Section 4 describes the details of the experiment. Section 5 presents the results of the practical experiment, followed by conclusions and future scope in Section 6.
Hand Gesture for Controlling a Robotic Device using a Personal Computer
Shaumika S Rane, Srisakhi Sengupta , Priyanka Kumari, Vishakha M Gandhi
In the field of image processing, hand gesture recognition is one of the most interesting and challenging areas. Gesture recognition is basically a non-verbal way of communication. Here we develop a system that recognizes human hand gestures performed in front of a personal computer and uses these gestures to control the movement of a robotic car placed in the vicinity. A web camera is used; after grabbing the image we subject it to image processing algorithms such as gray scaling, thresholding, blurring, blob detection, segmentation and finally vector calculation. The system uses a single colour camera mounted above a neutral-coloured desk surface next to the computer. The paper briefly describes the schemes for capturing the image from the web camera, detecting and processing the image to recognize gestures, and presents a few results. The approach can easily be implemented in a real-time system.
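A minimal OpenCV sketch of the pre-processing chain named above (gray scaling, blur, thresholding, blob/contour detection); the camera index, threshold choice and bounding-box output are assumptions, and the robot-control step is omitted.

```python
import cv2

# Grab frames from the default web camera (index 0 is an assumption).
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # gray scaling
    blur = cv2.GaussianBlur(gray, (5, 5), 0)                 # blur
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # thresholding

    # Blob / contour detection: the largest contour is taken as the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # The bounding-box centre could feed a direction command to the robot.

    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```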
Single sign-on mechanisms allow users to sign on only once and have their identities automatically verified by each application or service they subsequently want to access. Practical and secure single sign-on models are of great importance to current distributed applications: many application architectures require the user to memorize and use a different set of credentials (e.g. username/password or tokens) for each application he or she wants to access, which is neither practical nor secure given the exponential growth in the number of applications and services a user must access both inside and outside corporate environments. Single sign-on is an authentication mechanism that enables a legal user with a single credential to be authenticated by multiple service providers in distributed computer networks. In this paper we propose a new single sign-on scheme and argue its security with well-organized security arguments; we present the Chang and Lee scheme and aim to enhance its security using RSA encryption and decryption, with the programming done using socket programming in Java. Identification of users is an important access control mechanism for client-server networking architectures. The goal of such a platform is to eliminate individual sign-on procedures by centralizing user authentication and identity management at a central identity provider; with SSO the user is seamlessly authenticated to his multiple user accounts (across different systems) once he proves his identity to the identity provider.
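A generic RSA encrypt/decrypt sketch using the Python `cryptography` package (an assumption made for illustration; the paper's own implementation uses Java socket programming and the Chang and Lee scheme):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key pair for the identity provider (key size and parameters are illustrative).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

credential = b"single-sign-on token for user alice"   # placeholder credential

# Anyone holding the public key can encrypt the credential ...
ciphertext = public_key.encrypt(
    credential,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ... but only the private-key holder can recover it.
recovered = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert recovered == credential
```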
Review- Challenges in Face Authentication using Extended Principal Component Analysis
Samarjeet Powalkar, Prof. Moresh M.Mukhedkar
As the world moves towards globalization in engineering, the capability to establish the identity of individuals using the face as a biometric has become more important. This paper covers face recognition using the Extended Principal Component Analysis (EPCA) algorithm. The proposed algorithm uses the concepts of PCA and extends PCA with LDA (Linear Discriminant Analysis) to handle large numbers of training images in the database. The extended version of PCA may be used for face identification for security purposes, criminal face recognition, surveillance authentication, etc. Problems related to background, noise and occlusion, and finally speed requirements, are also addressed by this algorithm. The paper focuses on developing a face recognition system using an extended PCA algorithm; the proposed algorithm represents an improved version of PCA that deals with the orientation and lighting-condition problems present in the original PCA. Nevertheless, the PCA algorithm still performs well and gives good results with a small database.
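A hedged sketch of the PCA-plus-LDA idea using scikit-learn on synthetic stand-in data; the flattened "face" vectors and subject labels below are fabricated for illustration, and a real experiment would load a face database instead.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flattened face images (e.g. 32x32 = 1024 pixels),
# 10 subjects with 20 images each.
rng = np.random.default_rng(0)
n_subjects, per_subject, n_pixels = 10, 20, 1024
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(per_subject, n_pixels))
               for i in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), per_subject)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA compresses the pixel space; LDA then finds directions that best
# separate the subjects - the PCA+LDA combination sketched in the abstract.
model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
model.fit(X_tr, y_tr)
print("recognition accuracy:", model.score(X_te, y_te))
```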
A Survey on Sentiment Analysis And Summarization For Prediction
Vikrant Hole, Mukta Takalikar
Social networking portals are widely used by people all over the world to express their views (sentiments) on different issues such as products and movies, and to express their opinions on different political parties. These opinions or sentiments can be used to predict or analyse people's views of services, which can help companies and political parties understand their customers and take the steps needed to improve their effectiveness. Summarization of tweets makes it possible to understand hidden events and the sentiment associated with them. In this paper we first discuss the domains that can be predicted from current social media, and then show how summarization of sentiment with respect to events allows interested people to make decisions more easily.
Hadoop is popularly used for processing large amounts of data on its distributed programming framework with the Hadoop distributed file system (HDFS), but processing sensitive or personal data in a distributed environment demands secure computing. Hadoop was originally designed without any security model. Here, encryption and decryption are applied before writing data to and after reading data from HDFS. The Advanced Encryption Standard (AES) protects data at each cluster, performing encryption/decryption before write/read respectively. Hadoop runs in a distributed environment on commodity hardware, a network model that requires a strong security mechanism; in addition, Kerberos is used for authentication, and access control lists and audit mechanisms ensure that the data stored in the Hadoop file system is secure.
Review on Data merging and Data movement to accelerate Hadoop performance
Kishor Shinde, Prof. Venkateshan N
Hadoop is a popular framework for storing and processing big data in cloud computing. MapReduce and HDFS are its two major components: MapReduce is the programming model, and HDFS is the Hadoop distributed file system used as the storage component of the framework. The existing Hadoop system has several performance issues, such as a serialization barrier, repetitive merges and portability issues across different interconnects; effective and efficient I/O capability is also required. As data sizes increase day by day, Hadoop performance is becoming a critical issue, and handling large datasets requires improving performance by modifying the existing system. A network-levitated merge algorithm is used to avoid repetitive merges, and a full pipeline is designed to overlap the Hadoop shuffle, merge and reduce phases. Hadoop-A, a Hadoop acceleration framework, overcomes the portability issue across different interconnects; it also speeds up data movement and reduces disk accesses.
A Survey on Secure Reversible Data Hiding Techniques in Encrypted Images by Reserving Space In Advance
Anjaly Mohanachandran, Mary Linda P.A
In current trends, technology has advanced so much that most individuals prefer the internet as the primary medium to transfer data from one end to another across the world. Data transmission over the internet is simple, fast and accurate; however, one of its main problems is the security threat it poses, since personal or confidential data can be stolen or hacked in many ways. It therefore becomes very important to take data security into consideration, as it is one of the most essential factors needing attention during the process of data transfer. There are many research techniques related to internet security, cryptography, steganography and so on. One of these is data hiding, a concept that can provide security and authentication to a system. Ordinary data hiding cannot recover the original cover, whereas reversible data hiding can recover the original cover image without any loss. With Reversible Data Hiding (RDH) the embedding operation can be performed after encryption: first a content owner reserves space for embedding additional data and encrypts the original image, then a data hider embeds additional data in the space reserved in the encrypted image, and at the receiver side the host can extract the additional data and recover the original image. This improves the payload and security of the system. This work surveys the reversible data hiding techniques and the related methods and procedures that have been developed on this subject.
A Survey on different approaches of CBIR and their combinations
Krishna N. Parekh, Mr. Narendra Limbad
The main problem in CBIR is extracting image features that effectively represent the contents of an image in a database; this is difficult using a single feature, whereas combinations of different features give more efficient results. The combinations used may be colour-edge, colour-shape-texture, colour-texture or shape-texture; with more features, different aspects of an image can be represented. In these methods texture is characterized by the statistical distribution of image intensity, while shape features are described through segmentation of regions or objects. The colour-texture combination gives a fast retrieval method that is robust to scaling and translation of objects in an image, and single features such as Gabor features have lower precision than the combination of colour moments and Gabor texture. Experiments have shown that such combinations increase the efficiency of the results.
A Survey on Face Recognition and Facial Expression Identification
Ostawal Prachi Satish, Prof.P.G.Salunke
In this paper we present a survey of face recognition: first we review the existing history, second we describe the technical details such as methods, approaches and algorithms, and third we outline research opportunities in the face recognition field.
We start from psychophysical studies of how human beings perform this task. We then introduce descriptions of the techniques and explain some pre-processing difficulties and their solutions for face recognition systems.
Text summarization condenses a document or multiple documents into a smaller version while preserving the information content and meaning. It is very difficult for a person to manually summarize large documents of text. Text summarization methods can be classified into extractive and abstractive summarization, and many techniques have been developed over the years. In this paper, some of these techniques are studied and reviewed.
Heat Generation and Chemical Reaction Effects On MHD Flow Over An Infinite Vertical Oscillating Porous Plate With Thermal Radiation
B Lavanya, S.Mohammed Ibrahim, A Leela Ratnam
This work focuses on the nonlinear MHD flow, heat and mass transfer characteristics of an incompressible, viscous, electrically conducting Boussinesq fluid over a vertical oscillating porous permeable plate in the presence of a homogeneous first-order chemical reaction, thermal radiation and heat generation. The problem is solved analytically using the perturbation technique for the velocity, temperature and concentration fields. Expressions for the skin friction, Nusselt number and Sherwood number are obtained. The effects of the various thermo-physical parameters on the velocity, temperature and concentration, as well as on the skin-friction coefficient, Nusselt number and Sherwood number, are computed numerically and discussed qualitatively.
Creating a Knowledge Base by extracting Top-K lists from the web
Ramya.Y, K.Ramana Reddy
The World Wide Web is currently the largest source of information. However, most information on the web is unstructured natural-language text, and extracting knowledge from natural-language text is very difficult. Some information on the web exists as lists or web tables coded with specific tags such as <ul>, <li> and <table> on HTML pages, but it is questionable how much valuable knowledge can be extracted from them: the total number of web tables in the corpus is huge, yet only a very small percentage of them contain useful information. Today's search engines provide top-k lists among their results; these results contain a huge amount of information, yet users rarely visit any pages beyond the top two or three, and the results may also contain unwanted data. To avoid this drawback we develop a better method for mining top-k lists. In the proposed system we use a Path Clustering Algorithm that better processes the top-k web page, displaying only the required top lists related to the top-k title, which saves the user's time. The extracted lists can also be used as background knowledge for a question-answering system. We present an efficient method that extracts top-k lists from web pages with high performance; the system collects top-k lists of various interests, which together form a knowledge base, and provides a search option for mining them.
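As a much-simplified illustration of pulling list items out of a page's HTML structure (not the paper's Path Clustering Algorithm), here is a sketch using BeautifulSoup on a toy page; the page content is invented for the example.

```python
from bs4 import BeautifulSoup

# A toy page standing in for a "top-k" article; real pages would be fetched.
html = """
<html><body>
  <h1>Top 3 open source data mining tools</h1>
  <ol>
    <li>WEKA</li>
    <li>RapidMiner</li>
    <li>Orange</li>
  </ol>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
title = soup.find("h1").get_text(strip=True)          # candidate top-k title
items = [li.get_text(strip=True) for li in soup.find_all("li")]  # list items

print(title)
print(items)
```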
A Review Paper on Collaborative Black Hole Attack in MANET
Barleen Shinh, Manwinder Singh
Ad-hoc networks have become a new standard of wireless communication in infrastructure-less environments. A MANET is a Mobile Ad-hoc Network in which the nodes connect with each other without an access point; messages are exchanged and relayed between nodes, and routing algorithms forward packets between nodes that are not in direct range with the aid of intermediate nodes. MANETs are spontaneous in nature, and the absence of a centralized system makes them susceptible to various attacks. The black hole attack is one such attack, in which a malicious node advertises itself as the best route to the destination node and hinders the normal services provided by the network.
Data Integrity Proofs in Document Management System under Cloud with Multiple Storage
Ms. Payal P. Kilor, Prof. Vijay B. Gadicha
Cloud computing is web-based computing that offers customers the opportunity to store their data in the cloud. For cloud storage, privacy and security are burning issues: storing data in the cloud may not be fully trustworthy, since users risk losing data or having it modified or updated without authorization. Cloud storage moves the user’s data to large, remotely located data centres over which the user has no control. One important concern that needs to be addressed is assuring the customer of the integrity, i.e. correctness, of his data in the cloud. As the data is not physically accessible to the user, the cloud should provide a way for the user to check whether the integrity of his data is maintained or has been compromised. The aim of this paper is to ensure the integrity of the data, provide proof that the data is held securely, and provide a cryptographic key to secure the data in the cloud.
The need to find a desired image in a large database is shared by many professionals. Because of the many problems with text-based image retrieval, a new technology known as Content-Based Image Retrieval (CBIR) was introduced, and it has been an active research topic for the last decade. Image retrieval based on low-level features such as colour, texture and shape is a wide area of research. In this paper, methods for colour, texture and shape feature extraction are described; researchers can combine any of these methods and obtain the highest precision and recall by testing different combinations.
Investigating the ways through evaluation practice in higher Education: the value of a learner’s need
Jasleen Kaur, Anjali Bishnoi
The purpose of this study was to identify new directions for engineering education. Our survey shows that although traditional methods (TM) are considered good teaching methods, students rated team-based methods (TBM) as the most interesting. The results indicate that students are more inclined towards newer methods of teaching such as computer-aided methods (CAM) or team-based methods (TBM). For instance, 70% of students from Chemical Engineering and Environmental Science and Technology rated the industrial visit as ‘excellent’. We can say that they want to learn more through practice and through exposure to the real world.
Gender Recognition and Age-group Prediction: A Survey
Mr. Brajesh Patel, Mr. Raghvendra
Over recent years, a great deal of effort has been devoted to age estimation and gender recognition from face images. It has been reported that age can be estimated accurately under controlled conditions such as frontal faces, neutral expression and static lighting. However, it is not straightforward to achieve the same accuracy in real-world environments because of considerable variations in camera settings, facial poses and illumination conditions. In this paper, we discuss different methods for estimating age and predicting gender.
A Framework for Privacy Preserving Collaborative Data Mining
Gottipamula Padmavathi, T.V. Ramanamma
Privacy Preserving Data Mining has become popular for restricting access to data by unauthorized parties, i.e. it guarantees the protection of the individual records of each party. To make this possible, Privacy Preserving Data Mining provides techniques such as randomization and cryptography. In our application a cryptographic technique has been used to secure the parties' data: specifically, the DES algorithm encrypts and decrypts the data received from the different parties, and the Apriori algorithm analyses the collaborative data of the multiple parties. Finally, we propose an algorithm named Privacy Preserving Collaborative Data Mining (PPCDM) for the successful realization of our framework.
Novel Approach for Predicting Performance Degradation in Cloud
Pritam Fulsoundar, Rajesh Ingle
Cloud computing has become an important paradigm in the IT sector thanks to features such as on-demand service, cost effectiveness and elasticity. Because of these features and the lack of upfront investment, many organizations use the cloud for their computational needs, but some still worry about quality of service. To address this issue, an SLA is agreed between the cloud provider and the client organization. Avoiding SLA violations then becomes necessary for the cloud provider, which makes it necessary to predict performance degradation so that actions to avoid violations can be carried out. In this work we address this issue, focusing on CPU cycles as the resource used to predict performance degradation in the cloud. Our results show that cloud performance degrades as the CPU usage of a virtual machine (VM) running on the cloud increases, and we propose a system to predict performance degradation in the cloud.
Nowadays, the internet has become essential for banking, education, business and many more applications. Users search for information on the web using search engines, by clicking on hyperlinks or via keyword queries, so developing applications that capture user search intent is challenging, given the increased expectations and diverse needs of users. In this paper, we survey the various techniques used for query suggestion, personalization and FAQ identification. We review query suggestion based on query contents, document clicks, query frequency and semantic features. By automating the optimization of web search, we can minimize user effort and maximize user satisfaction in obtaining the desired results.
Restoration of Historical Wall Paintings Using Improved Nearest Neighbour Algorithm
Sukhjeet Kaur, Amanpeert Kaur
Wall painting restoration is the process of restoring old and damaged wall paintings, which have cracks, fold marks, dark or white spots and so on, back to their original or near-original state. It recovers wall paintings corrupted by natural phenomena such as unfavourable weather conditions, dust and smoke, which cause problems such as cracks. To overcome these limitations, a nearest neighbour algorithm is used that can both detect and remove the cracks, so that the quality of the wall painting images can be improved. For further improvement, another deformity, white spots, is also detected and removed, and the nearest neighbour algorithm is improved by increasing the contrast and saturation. The nearest neighbour algorithm gives more accurate results than the SIHF algorithm on the parameters of peak signal-to-noise ratio (PSNR) and mean squared error (MSE), removing a greater number of cracks and white spots.
Internet crime is crime committed on the Internet, using the Internet and by means of the Internet. Since the introduction of the Internet, the world has been introduced to innovative technology that has seemingly enhanced our lives and brought convenience to busy lifestyles. However, this continuum of boundless information has become a portal for Internet criminals to breach the barriers of online privacy and protection. Internet crimes (also termed ‘computer’, ‘web’ or ‘cyber’ crimes) are viewed as one of the greatest challenges of the 21st century. From child pornography and sex crimes to Internet theft and computer fraud, prosecutors and investigators are aggressively confronting all types of computer and web crime.
Discussion on Very Small Aperture Terminal Networks
Er. Anup Lal Yadav, Er. Sahil Verma, Er. Kavita
Recent advances in technology have given a new thrust to the satellite communication industry through the deployment of low-cost very small aperture terminal (VSAT) networks for data, voice and video communication. VSAT networks are characterized by their low cost per node, integrated network management facilities, and support for multiple applications and integrated communication services. During the last five years, the evolution of VSATs has brought significant change to the satellite communications industry, both in current product offerings and in future development strategies. In this paper, we present an introduction to VSAT and the architectural design of a VSAT network. Future trends in VSAT technology are also discussed.
Design of Low Power 9T Full Adder Based 4*4 Wallace Tree Multiplier
R.Naveen, K.Thanushkodi, R.Preethi, C.Saranya
The multiplier is a key element used for arithmetic operations in a digital signal processor. Power consumption in the multiplier is higher than in adders and subtractors, so reducing the multiplier's power consumption makes a digital signal processor more efficient. A Wallace tree multiplier is an efficient high-speed multiplier for two integers. Here a 4*4 Wallace tree multiplier is proposed with ten full adders; each full adder in this design uses only nine transistors, fewer than conventional full adders. This reduces the power consumption of the full adder block and hence of the 4*4 Wallace tree multiplier. The proposed design is simulated using 0.12 µm technology in the Microwind 2 tool and achieves up to 50% power saving compared with a Wallace tree multiplier designed using conventional full adders.
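A bit-level behavioural sketch of the Wallace-tree idea, reducing partial-product columns with full adders (3:2 compressors) until two rows remain; this is a software illustration only and says nothing about the transistor-level 9T adder of the paper.

```python
def full_adder(a, b, c):
    # One-bit full adder: returns (sum, carry).
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

def wallace_multiply(x, y, width=4):
    # Column-wise partial products: cols[k] holds bits of weight 2**k
    # (one spare column kept for transient carries).
    cols = [[] for _ in range(2 * width + 2)]
    for i in range(width):
        for j in range(width):
            cols[i + j].append(((x >> i) & 1) & ((y >> j) & 1))

    # Reduce every column to at most two bits using full adders.
    while max(len(c) for c in cols) > 2:
        nxt = [[] for _ in range(2 * width + 2)]
        for k, col in enumerate(cols):
            while len(col) >= 3:
                s, c = full_adder(col.pop(), col.pop(), col.pop())
                nxt[k].append(s)
                nxt[k + 1].append(c)
            nxt[k].extend(col)
        cols = nxt

    # Final carry-propagate addition of the remaining two rows.
    return sum(bit << k for k, col in enumerate(cols) for bit in col)

# Exhaustive check of the 4x4 case against ordinary multiplication.
assert all(wallace_multiply(a, b) == a * b for a in range(16) for b in range(16))
print("4x4 Wallace-tree reduction verified")
```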
India presently generates construction and demolition (C&D) waste to the tune of 23.75 million tonnes annually, and these figures are likely to double in the next 7 years. C&D waste, and specifically concrete, has been seen as a resource in developed countries. Work on recycling has emphasized that if old concrete is to be used in second-generation concrete, the product should adhere to the required compressive strength. This paper reviews the existing literature on the use of recycled concrete as aggregate, mainly with respect to compressive strength, and proposes an approach for using recycled concrete aggregate without compromising strength. The need for demolition, repair and renewal of concrete and masonry structures is rising all over the world, especially in developing countries. It is highly desirable that waste concrete and bricks be reutilized after the demolition of old structures, particularly since this helps reduce the environmental damage caused by excessive, reckless quarrying for earth materials and stones, and reduces the pressure to find new dumping grounds for these wastes, thereby protecting the natural environment and ecosystem. This paper critically examines such properties in reused concrete and brick masonry waste materials and suggests suitable recommendations for further enhancing the life of such structures, resulting in worthwhile savings in the cost of buildings.
Using data mining in prediction of educational status
Samira Talebi, Ali AsgharSayficar
The aim of this paper is to predict students’ academic performance, which is useful for identifying weak students at an early stage. In this study, we used the WEKA open source data mining tool to analyse attributes for predicting students’ academic performance. The data set comprised 180 student records with 21 attributes of students registered between 2010 and 2013, drawn from Azad University of Mashhad. We applied the data set to four classifiers (Naive Bayes, LBR, NBTree, Best-First Decision Tree) and obtained the accuracy of predicting students’ performance as either successful or unsuccessful. A student's academic performance can be predicted using knowledge discovered from the existing database. Cross-validation with 10 folds was used to evaluate prediction accuracy. The result showed that the Naive Bayes classifier scored the highest prediction F-measure, 88.7%.
Multimedia Data Mining- A Survey
Anuraag Vikram Kate, Nikilav P V , Giriesh S, Hari Prasath R S, Naren J
In recent years, data mining has been an effective and powerful approach for extracting concealed knowledge from huge collections of structured digital data stored in databases. From its inception, data mining was done predominantly on numerical data sets. Nowadays, as large multimedia data sets such as audio, speech, text, web, image, video and combinations of several types become increasingly available, and are mostly unstructured or semi-structured by nature, it is difficult to extract the information without powerful tools. This drives the need to develop data mining techniques that can work on all kinds of data, such as documents, images and signals. This paper surveys the current state of multimedia data mining and knowledge discovery, data mining efforts aimed at multimedia data, and current approaches and well-known techniques for mining multimedia data.
Optimized super pixel segmentation for natural image using lazy random walk algorithm
A. senthil kumari.P, B kapalishwari.T, C.tamilarasi.M
The image superpixel segmentation approach using the lazy random walk (LRW) algorithm with self-loops can segment weak boundaries and complicated texture regions very well, thanks to the new global probability maps and the commute-time strategy. Our technique begins by initializing the seed positions and runs the LRW algorithm on the input image to obtain the probability of each pixel. The boundaries of the initial superpixels are then obtained according to the probabilities and the commute times. The initial superpixels are iteratively optimized by an energy function defined on the commute time and a texture measure. Superpixel performance is improved by relocating the centre positions of the superpixels and splitting large superpixels into smaller ones with the optimization algorithm. Experimental results confirm that our method performs better than previous superpixel approaches.
A Literature Survey on performance evaluation of query processing on encrypted database
Rajendra H. Rathod , Dr.C.A.Dhote
All database systems must be able to respond to user requests for information, that is, to process queries. Secure and efficient algorithms are needed that provide the ability to query an encrypted database and allow optimized encryption and decryption of data. Data encryption is a strong option for securing data in databases, especially in organizations where security risks are high, and firms outsourcing their databases to untrusted parties have started looking for new ways to store data securely and query it efficiently. When encryption is applied to a database, a compromise must be made between security and efficient query processing, because the operations of encryption and decryption greatly degrade query performance.
This paper presents a survey of various encryption algorithms and identifies many of the common issues, themes and approaches covering query processing. A survey is also made of query processing performance, along with the pros and cons of the existing algorithms.
Dual-Link Failure Resiliency through Backup Link Mutual Exclusion
Prof. C. M. Jadhav, Prof. Amruta. R. Shegadar, Prof. S. Shabade
Link failures are common in every network, so we propose schemes for networks to protect their links against failures. Networks use link protection to achieve fast recovery from link failures. While the first link failure can be handled using link protection (by defining a backup path), there are several alternatives for protecting against the second failure, and we formally classify the approaches to dual-link failure resiliency. One strategy for recovering from dual-link failures is to employ link protection for the two failed links independently, which requires that two links may not use each other in their backup paths if they may fail simultaneously. Such a requirement is referred to as the backup link mutual exclusion (BLME) constraint, and the problem of identifying a backup path for every link that satisfies this requirement is referred to as the BLME problem; if the constraint is not met, finding a new link after a simultaneous failure can cause sender timeouts and packet loss. The solution methodologies for the BLME problem are: 1) formulating the backup path selection as an integer linear program; and 2) developing a polynomial-time heuristic based on minimum-cost path routing.
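The BLME constraint itself is easy to state in code: for any pair of links that may fail simultaneously, neither link may appear in the other's backup path. The following sketch checks that condition on illustrative data (the topology, backup paths and failure pairs are made up).

```python
# Backup path assigned to each link, expressed as the list of links the
# backup path traverses. Names and paths are illustrative only.
backup_path = {
    ("A", "B"): [("A", "D"), ("D", "B")],
    ("C", "B"): [("C", "A"), ("A", "B")],
    ("A", "C"): [("A", "D"), ("D", "C")],
}

# Pairs of links that may fail simultaneously (e.g., shared-risk link groups).
simultaneous_failures = [
    (("A", "B"), ("C", "B")),
    (("A", "B"), ("A", "C")),
]

def violates_blme(backup_path, pair):
    # BLME constraint: if two links can fail together, neither may appear
    # in the other's backup path.
    l1, l2 = pair
    return l2 in backup_path.get(l1, []) or l1 in backup_path.get(l2, [])

for pair in simultaneous_failures:
    status = "VIOLATES BLME" if violates_blme(backup_path, pair) else "ok"
    print(pair, "->", status)
```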
An Efficient Content Based Image Retrieval System for Sketches by Using PAM Algorithm
A. Ravi Kumar, A. Yuva Krishna
In this paper we propose a new and effective image retrieval scheme using colour, texture and shape features based on K-medoids. The image is first processed with a fast colour quantization algorithm with cluster merging, so that a small number of dominant colours and their percentages can be obtained. The spatial features are extracted using clustering methods, which offers an efficient and flexible approximation of early processing in the human visual system (HVS), provides better feature representation and is more robust to noise than other representations. Finally, the combination of colour, shape and texture features provides a robust feature set for image retrieval. Experimental results show that the proposed method provides better colour image retrieval and is accurate and effective in retrieving the images of interest to the user.
In this color mixing machine we use three tanks, for Red, Blue and Yellow. Each tank has a level sensor and is fitted with a hydraulic line, and each hydraulic line contains a solenoid valve. The tanks hold the color stainer of Red, Blue and Yellow, which flows through the hydraulic line into the mixing tank via the solenoid valve. The solenoid valve is a digital output device connected to the PLC; there are three solenoid valves, one for each tank. The white color tint is kept in tank 4, in the required proportion (as per the order). The weight of the mixing tank is measured by a weight sensor, i.e. a load cell, which is an analog output device.
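The dosing sequence described above (open each tank's solenoid valve until the load cell shows the required weight has been added, then close it) can be sketched as below. This is a hypothetical simulation of mine, not the system's PLC program; on real hardware the same logic would be ladder logic or structured text, and the MixingRig class merely stands in for the I/O.

class MixingRig:
    """Toy stand-in for the PLC I/O (assumed names, not the paper's system)."""
    def __init__(self):
        self.weight = 0.0
        self.open_valve = None
    def read_load_cell(self) -> float:
        if self.open_valve is not None:
            self.weight += 0.02           # simulate stainer flowing into the mixing tank
        return self.weight
    def set_valve(self, tank, open_):
        self.open_valve = tank if open_ else None

def dose(rig, recipe, tolerance=0.01):
    """recipe maps tank -> weight (kg) to add; open each valve until the load cell shows it."""
    for tank, target in recipe.items():
        start = rig.read_load_cell()
        rig.set_valve(tank, True)
        while rig.read_load_cell() - start < target - tolerance:
            pass                          # on real hardware: poll the analog input
        rig.set_valve(tank, False)        # close the valve once the weight is reached

rig = MixingRig()
dose(rig, {"red": 1.2, "blue": 0.6, "yellow": 0.3, "white": 5.0})
print(round(rig.weight, 2))               # ~7.1 kg total in the mixing tank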
An Optimized Mechanism for Multifunction Diagnosis of Kidneys by using Genetic Algorithm
Miss.Vaishali M. Sawale, Prof. A.D. Chokhat
In the existing system, detection took more time (in minutes) and the output was less accurate. This research presents a multifunctional platform focusing on the clinical diagnosis of kidneys and their pathology (tumors, stones and cysts), using a “genetic algorithm”. It presents the automatic tumor detection (ATD) platform: a new system to support a method for increased automation of the detection of kidneys as well as their abnormalities (tumors, stones and cysts). As a first step, specialist clinicians guide the system by providing accurate annotations. The medical laboratory technicians then adjust rules and parameters (stored as “templates”) for the included “automatic recognition framework” to achieve results closest to those of the clinicians. These parameters can later be used by non-experts to achieve increased automation in the identification process. The system’s performance was tested on MRI datasets, and the “automatic 3-D models” created were validated against the “3-D golden standard models”. Results are promising, giving an average accuracy of 97.2% in successfully identifying kidneys and 96.1% for their abnormalities, thus outperforming existing methods both in accuracy and in processing time. In this paper, the proposed design defines the “genetic algorithm”, which generates the output within a second and more accurately than the existing system.
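The abstract names a genetic algorithm for tuning the recognition parameters. The skeleton below is a hedged, generic GA of that kind, for illustration only: the fitness function, the meaning and ranges of the parameters, and the operators are placeholder choices of mine, not the ATD platform's.

import random

def fitness(params):
    # Placeholder: in the real system this would score detection accuracy of
    # the recognition framework configured with these parameters.
    target = [0.3, 0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=30, n_params=3, generations=50, mut=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, mut))) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(g, 2) for g in best])   # should approach the placeholder target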
Development of Cognitive Architecture using Ambient and Swarm Intelligence for the Agents
Ashwini K, Dr M V Vijayakumar
This paper demonstrates how to design an ambient environment for a group of agents, called a Swarm, exhibiting Swarm Intelligence. Ambient and Swarm Intelligence are emerging technologies in the field of Artificial Intelligence. The aim of the proposed ambient environment is to ensure that bad agents are prevented from collecting the resources. A Swarm is a group of homogeneous individuals which interact locally among themselves and with their surrounding environment, with no centralized control, and from which interesting global behavior emerges. In the proposed Swarm, a group of agents has to collectively collect resources from the environment and together construct a bridge. The main aim of this research is to examine two parameters, Coordination and Performance: how much coordination must be provided so that maximum performance is achieved. It is concluded that coordination is directly proportional to performance.
OpthoABM-An Intelligent Agent Based Model for Diagnosis of Ophthalmic Diseases
Falguni Ranadive, Prof. Priyanka Sharma
Ophthalmic diseases are rarely fatal, but they tend to progress over time (morbidity) and have a large impact on the daily life of patients. Diseases like Glaucoma and Diabetic Retinopathy have a chronic course which can be sight-threatening without timely intervention and proper follow-up. Diagnosis of such diseases requires examinations with optometry instruments guided by the symptoms. An integrated interpretation of these examination results is needed, since the results differ in representation and significance, in order to reach a final diagnosis. Currently this interpretation depends on individual attributes such as the past experience and domain knowledge of the expert. There is therefore a need for automated diagnosis that is free from individual attributes and can produce an integrated interpretation. This paper proposes an Intelligent Agent Based Model for diagnosis of ophthalmologic diseases. The intelligent agents of the model are specialized and use advanced computational techniques for deriving a diagnosis from numerical data and images, combining the autonomy and communicative characteristics of agents to reach a final diagnosis.
Tracking Online Assessments through Information Visualization
T.Sudha Rani, Anusha.Y
We present a strategy and a system to let instructors observe several main aspects related to online assessments, such as student behavior and test quality. The strategy includes the logging of essential information related to student interactions with the system during the performance of online assessments, and uses information visualization to highlight information useful to let instructors review and improve the whole evaluation process. We have focused on the development of behavior patterns of students and conceptual relationships among test items. Furthermore, we have conducted several experiments with our staff in order to evaluate the whole strategy. In particular, by examining the visualization maps, we have recognized several previously unknown test-taking strategies used by the students. Lastly, we have recognized several relationships among questions, which gave us useful feedback on test quality.
Latent Overlapped Fingerprint Matching Using Level-2 and Level-3 Features Refinement
Rajasekar .T , Uma Maheswari.N
Fingerprints are among the most important means of identification in crime scene investigation, and fingerprint technology has remained reliable since the early 20th century. A latent fingerprint is a main source of forensic evidence in a court of law, and overlapped fingerprints are often encountered in latent prints. Automatically matching latent fingerprints against rolled or plain fingerprints with high accuracy is important for these applications. A latent impression is typically of poor quality with noticeable background noise, which makes feature extraction and matching of the latent a notable problem. The problem has been addressed by a performance gain obtained via feedback from exemplar prints. To solve the problem, refinement of level-2 and level-3 features is proposed: level-2 features such as minutiae (ridge endings and bifurcations) are introduced, and similarly level-3 features such as open and closed pores are introduced.
It is known that exposure to loud noise for prolonged periods of time is dangerous and can affect quality of life with respect to health and psychology. In severe cases it may result in damage to, or permanent loss of, hearing. Exposure to 85 dB noise for more than eight hours, or to 100 dB noise for more than two hours, is considered dangerous for humans. We studied noise due to machinery in use in heavy industry and present findings and details on noise due to a Transfer Press Machine used in the forming of heavy metal sheets.
Face Recognition Using Image Processing Techniques: A Survey
Selvapriya.M , Dr.J.KomalaLakshmi
Face detection is becoming one of the most interesting topics in the computer vision literature. This survey analyzes face recognition techniques and gives a timeline view of different methods for handling general face recognition problems. Image processing techniques focus on two major tasks: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission and representation for autonomous machine perception. One of the common fundamental techniques that facilitate natural human-computer interaction (HCI) is face detection. In this paper, the survey is based on a comparison of recent advances in face detection using various image processing techniques such as Eigenfaces, Hidden Markov Models (HMM), geometry-based algorithms and template matching algorithms. These techniques improve quality, remove noise, are versatile in nature, and preserve the original data precision of the image.
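Of the techniques the survey names, Eigenfaces is compact enough to sketch: PCA over flattened face images, with a probe image projected onto the top eigenvectors and matched by nearest neighbour. The sketch below is illustrative only; random arrays stand in for real face images, and the dimensions are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
train = rng.random((20, 32 * 32))          # 20 flattened 32x32 "face" images
labels = np.arange(20)                     # one identity per training image

mean_face = train.mean(axis=0)
centered = train - mean_face
# Principal components (eigenfaces) via SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                       # keep the top 10 components

def project(img):
    return eigenfaces @ (img - mean_face)

train_proj = centered @ eigenfaces.T       # training images in eigenface space

def recognize(img):
    """Return the label of the nearest training face in eigenface space."""
    d = np.linalg.norm(train_proj - project(img), axis=1)
    return labels[d.argmin()]

probe = train[7] + 0.05 * rng.random(32 * 32)   # noisy copy of face 7
print(recognize(probe))                          # expected: 7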
Proper Relay Reduction and Route Breakage Prediction in Bluetooth Scatternet Scheduling
Suganya.K, Mrs.A.Nirmala
Bluetooth is a promising new technology for short-range wireless connectivity between mobile devices. By constructing a piconet, a Bluetooth device establishes a link and communicates with other devices in a master–slave manner. Piconets are combined to form a Scatternet and communicate through relay/bridge nodes, so the performance of the Scatternet depends highly on the relays and their degree and mobility. Unnecessary relays cause scheduling overhead and inefficient use of limited resources; thus, an optimum number of relays should be maintained without any control overhead. The proposed method achieves this goal through relay reduction with load balancing and route breakage prediction. By implementing the proposed protocol, Scatternet performance is improved through reduced packet loss and route recovery time.
Denial-of-Service (DoS) attacks pose a significant threat to the Internet today, especially if they are distributed, i.e., launched simultaneously at a large number of systems. Reactive techniques that try to detect such an attack and throttle down malicious traffic prevail today but usually require an additional infrastructure to be really effective. In this paper we show that preventive mechanisms can be as effective with much less effort: we present an approach to (distributed) DoS attack prevention that is based on the observation that coordinated automated activity by many hosts needs a mechanism to remotely control them. To prevent such attacks, it is therefore possible to identify, infiltrate and analyze this remote control mechanism and to stop it in an automated fashion. We show that this method can be realized in the Internet by describing how we infiltrated and tracked distributed denial-of-service attacks using a hybrid peer-to-peer botnet monitoring system.
Evaluating Effectiveness of 3D object picking algorithms in Non-immersive virtual world
Dr.M.Mohamed Sathik, K.Merriliance
As ‘3D object picking in a non-immersive virtual world’ is an essential interaction technique in virtual environments, it is performed using a 3D mouse or an interactive glove to explore and interact with any of the objects. In this paper we focus on the advantages of picking using a bounding box and the pick-cube algorithm over picking using a bounding sphere and the pick-sphere algorithm. The usefulness and effectiveness of the proposed evaluation measures are shown by reporting the performance evaluation of the two algorithms. We then compare the application of both algorithms with related work to demonstrate that they are more suitable. These analytical studies provide distinct advantages in terms of ease of use and efficiency, because they consider object picking as an effective, application-independent picking technique for various input devices. In this paper, a cube and a sphere are used to represent the intersection point of the ray coming from the user input and the object to be picked. By intersecting the objects in the scene with the ray, with a pick cube placed at the intersection point, it is determined which object is picked. An object is selected when this pick cube intersects the object’s bounding box.
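The final selection rule described above (an object is picked when the pick cube placed at the ray's hit point overlaps the object's bounding box) reduces to an axis-aligned overlap test. The sketch below is a minimal illustration of that test under my own assumptions: axis-aligned bounding boxes and a cube defined by its center and half-size.

from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple   # (x, y, z)
    max_pt: tuple

def pick_cube_hits(hit_point, half_size, box: AABB) -> bool:
    """True if the cube [hit_point +/- half_size] overlaps the box on every axis."""
    return all(
        hit_point[i] + half_size >= box.min_pt[i] and
        hit_point[i] - half_size <= box.max_pt[i]
        for i in range(3)
    )

teapot = AABB(min_pt=(0.0, 0.0, 0.0), max_pt=(1.0, 1.0, 1.0))
print(pick_cube_hits((1.05, 0.5, 0.5), 0.1, teapot))   # True: cube overlaps the box
print(pick_cube_hits((2.0, 0.5, 0.5), 0.1, teapot))    # False: too far along x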
Reliable Techniques for Data Transfer in Wireless Sensor Networks
S.Lavanya, Dr. S.Prakasm
Reliable delivery of data is a key challenge in Wireless Sensor Networks (WSN). This paper discusses various challenges in achieving data reliability. Various data transport protocols that deliver data in both the upstream and downstream directions are discussed in detail. Furthermore, the existing data transport reliability protocols are analyzed based on various reliability levels. Techniques based on retransmission and redundancy are discussed. Retransmission techniques seem efficient but lead to overhead. Redundancy techniques like erasure codes, Reed-Solomon codes and route fix seem to be the best alternatives in a resource-constrained WSN. Finally, it is argued that the right combination of retransmission and redundancy techniques helps to achieve high reliability. On the basis of the analysis, a few research challenges in achieving reliability are pointed out.
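To illustrate the redundancy idea behind the erasure and Reed-Solomon codes mentioned above, here is a minimal sketch of the simplest case: k data packets plus one XOR parity packet tolerate any single packet loss without retransmission (Reed-Solomon generalizes this to multiple losses). The packet contents are made up for illustration.

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Return the packets plus one parity packet (all packets assumed equal length)."""
    parity = reduce(xor_bytes, packets)
    return list(packets) + [parity]

def recover(received):
    """received: list where exactly one entry is None (the lost packet)."""
    lost = received.index(None)
    parity_of_rest = reduce(xor_bytes, (p for p in received if p is not None))
    restored = received[:]
    restored[lost] = parity_of_rest
    return restored[:-1]                      # drop the parity packet again

data = [b"temp", b"humi", b"pres"]            # equal-length sensor readings
sent = encode(data)
sent[1] = None                                # packet 'humi' lost in transit
print(recover(sent))                          # [b'temp', b'humi', b'pres']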
Preventing Sensitive Social Network Data By Using UINN Algorithm
Tummala Surya Padma , B.Venkata Reddy
Publishing or sharing social network data for social science research and business analysis raises privacy concerns. The existing k-anonymity technique is used to prevent identification of microdata, yet an attacker may still gain sensitive data if a group of nodes largely share the same sensitive labels. We propose an algorithm, Universal-match based Indirect Noise Node (UINN), which makes use of noise nodes to preserve the utility of the original graph. This technique prevents an attacker from re-identifying a user and from discovering that a certain user has a specific sensitive value.
FIS for Edge detection based on the method of convolution and a mixture of Gaussian and triangular membership functions
Archana R Priyadarshini
Fuzzy logic is a technique for embodying human-like thinking into a control system. Such systems find applications in modeling the human brain and in the creation of behavioural systems. Fuzzy-logic-based systems provide a more efficient and resourceful way to build control systems; they are used in business, hybrid modelling, air conditioners, video game artificial intelligence, pattern recognition in remote sensing, and expert systems. The main reason for using triangular and Gaussian membership functions for FIS-based edge detection is that they provide simplicity, reliability and robustness.
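The two membership functions named in the title can be sketched directly. The following is a minimal illustration with a toy fuzzy rule on gradient magnitude; the parameter values and the single rule are my own illustrative choices, not the paper's actual inference system.

import math

def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def gaussian(x, mean, sigma):
    """Gaussian membership centred at 'mean' with spread 'sigma'."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def edge_degree(gradient_magnitude):
    """Toy rule: 'edge' if the gradient is HIGH (Gaussian) and not LOW (triangular)."""
    low = triangular(gradient_magnitude, 0.0, 0.1, 0.3)
    high = gaussian(gradient_magnitude, 1.0, 0.25)
    return min(1.0 - low, high)          # fuzzy AND via min

for g in (0.05, 0.4, 0.9):
    print(f"gradient {g:.2f} -> edge membership {edge_degree(g):.2f}")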
Metadata Construction Model for Web Videos: A Domain Specific Approach
Siddu P. Algur, Prashant Bhat, Suraj Jain
The advances in computer and information technology, together with the rapid evolution of multimedia data, have resulted in huge growth of digital video. Due to the rapid growth of digital data and video databases over the Internet, it is becoming very important to extract useful information from visual data. The scientific community has increased the amount of research into new technologies, with a view to improving digital video utilization: its archiving, indexing, accessibility, acquisition, storage, and even its processing and usability. All these aspects of video utilization require the extraction of the important information of a video, especially in cases where metadata is lacking. This paper describes the importance of descriptive metadata of video, categorization of video contents based on video descriptive metadata, and high-level metadata based web video modeling. The main goal of this paper is to construct high-level descriptive metadata, with and without a timeline, for all categories of videos, and to extract metadata from web videos such as those on YouTube and Facebook. Using the high-level descriptive metadata, a user can on the one hand locate a specific video and on the other hand rapidly comprehend its basic points and, in general, the main concept of a video without the need to watch the whole of it.
Evaluating the key findings of Image Fusion Techniques
Harkamal preet kaur, Sunny Dhawan
The objective of digital image fusion is to combine two or more source images to obtain a single digital image, known as the fused image, which is more informative for humans as well as machines. Different methods of fusion have been proposed in the literature, both in the spatial domain and in the wavelet domain. This paper proposes pixel-level image fusion using a multiresolution transform (BWT). Two further methods used in this paper are DCT and PCA. The performance of the PCA-based fusion, and the sharpness of the fused image, are evaluated using metrics such as fusion factor, entropy and standard deviation. DCT-based fusion, based on variance calculation, is suitable and time-saving for real-time fusion of multifocus images.
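A minimal sketch of the PCA-weighted pixel-level fusion mentioned above: the principal eigenvector of the 2x2 covariance of the two source images gives the (normalized) fusion weights. Random arrays stand in for two registered source images, and the BWT/DCT variants are not shown; this is an illustration, not the paper's implementation.

import numpy as np

def pca_fuse(img1, img2):
    """Weight each source image by the principal eigenvector of their covariance."""
    data = np.stack([img1.ravel(), img2.ravel()])      # 2 x N matrix
    cov = np.cov(data)                                  # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, eigvals.argmax()])    # dominant component
    w1, w2 = principal / principal.sum()                # normalized fusion weights
    return w1 * img1 + w2 * img2

rng = np.random.default_rng(0)
a = rng.random((64, 64))            # e.g. a source image with a sharp foreground
b = rng.random((64, 64))            # e.g. a source image with a sharp background
fused = pca_fuse(a, b)
print(fused.shape, round(float(fused.mean()), 3))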
A Research Paper on Content Based Image Retrieval System using Improved SVM Technique
Deepu Rani, Monica Goyal
Content-based image retrieval utilizes representations of features that are automatically extracted from the images themselves. Almost all current CBIR systems allow querying-by-example, a technique wherein an image (or part of an image) is selected by the user as the query. The system extracts the features of the query image, searches the database for images with similar features, and presents relevant images to the user in order of similarity to the query. In this context, content includes, among other features, perceptual properties such as texture, color, shape, and spatial relationships. Many CBIR systems have been developed that compare, analyze and retrieve images based on one or more of these features. Some systems have achieved various degrees of success by combining both content-based and text-based retrieval. In all cases, however, there has been no definitive conclusion as to which features provide the best retrieval. In this paper we present a modified SVM technique to retrieve the images most similar to the query image.
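As a hedged baseline of the kind the modified technique would be compared against (not the authors' method), the sketch below feeds simple color-histogram features to scikit-learn's standard SVC and ranks database images by the probability of belonging to the query's class. Random arrays stand in for a labeled image collection.

import numpy as np
from sklearn.svm import SVC

def color_histogram(img, bins=8):
    """Concatenated per-channel histograms, each normalized to sum to 1."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
# Two fake "categories": darker images (label 0) and brighter images (label 1).
dark = rng.integers(0, 128, size=(30, 16, 16, 3))
bright = rng.integers(128, 256, size=(30, 16, 16, 3))
X = np.array([color_histogram(im) for im in np.concatenate([dark, bright])])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(probability=True).fit(X, y)

query = rng.integers(128, 256, size=(16, 16, 3))        # a "bright" query image
query_class = clf.predict([color_histogram(query)])[0]
# Rank the database by probability of belonging to the query's class.
scores = clf.predict_proba(X)[:, query_class]
print("query class:", query_class, "top matches:", np.argsort(scores)[::-1][:5])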
An Efficient Technique for Parallel CRC Generation
CH. Janakiram, K.N.H.Srinivas
The Cyclic Redundancy Check plays a vital role in detecting errors in networking environments. With ever-increasing data transmission speeds, it is necessary to increase the speed of CRC generation accordingly. This paper presents a 64-bit parallel CRC architecture based on the F-matrix, with a generator polynomial of order 32. The implemented design is hardware efficient and requires 50% fewer cycles to generate the CRC for the same order of generator polynomial. The 32-bit CRC is used in Ethernet frames for error detection. The whole design is functionally developed and verified using the Xilinx ISE 12.3i simulator.
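For reference, here is a bit-serial CRC-32 (the Ethernet polynomial, reflected form) written as a software model of the kind a parallel/F-matrix hardware design is typically verified against. This is an illustrative sketch, not the paper's architecture.

def crc32_reference(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                     # process one bit per iteration
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value: CRC-32 of "123456789" is 0xCBF43926.
print(hex(crc32_reference(b"123456789")))

# Cross-check against Python's built-in implementation.
import zlib
assert crc32_reference(b"123456789") == zlib.crc32(b"123456789")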
Duplicate Detection Algorithm In Hierarchical Data Using Efficient And Effective Network Pruning Algorithm: Survey
Ashwini G Rathod, Vinod S Wadne
Duplicate detection consists in detecting multiple representations of the same object, for every object represented in a data source. It is relevant in data cleaning and data integration applications and has been studied extensively for relational data describing a single type of object in a single table. The main aim of this work is to detect duplicates in hierarchically structured data. The proposed system focuses on a specific type of error, namely fuzzy duplicates, or duplicates for short. Detecting duplicate entities that describe the same real-world object is an important data cleansing task and is essential for improving data quality; numerous solutions to this problem exist for data stored in a flat relation.
Duplicate detection, an important subtask of data cleaning, involves identifying multiple representations of the same real-world object. Numerous approaches exist for relational and XML data; their goal is either to improve the quality of the detected duplicates (effectiveness) or to save computation time (efficiency).
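To illustrate what a "fuzzy duplicate" is, the sketch below flags record pairs whose string similarity exceeds a threshold, using Python's difflib. This is only a toy illustration; the surveyed approaches use richer similarity measures and candidate pruning over the hierarchical/XML structure.

from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = [
    "Jon Smith, 12 Baker Street, London",
    "John Smith, 12 Baker St., London",
    "Mary Jones, 5 High Road, Leeds",
]

threshold = 0.8
for (i, r1), (j, r2) in combinations(enumerate(records), 2):
    s = similarity(r1, r2)
    if s >= threshold:
        print(f"likely duplicates ({s:.2f}): {r1!r} / {r2!r}")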
Tree Detection for Urban Environment Using Watershed Segmentation
Priyanka Garg, Mukesh Rawat
Destruction of trees at the urban level is expanding quickly, leading to great environmental stress. This needs to be controlled to preserve and maintain the ecological balance of the earth. Accurate urban vegetation information is required to manage green assets so that regions can keep track of the investment made in urban forestry activities. The study presents a methodology to estimate the tree population by segmenting very-high-resolution (VHR) imagery with the help of a marker-controlled watershed algorithm, which segments all the trees and other objects individually. Implementation based on a texture algorithm, a color-based algorithm, the mean-shift algorithm and thresholding can also be performed.
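A minimal marker-controlled watershed sketch using SciPy and scikit-image, run on a synthetic binary mask of two overlapping "tree crowns". The thresholds and the synthetic image are my own illustrative choices, not the paper's pipeline or data.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic canopy mask: two overlapping circular crowns.
yy, xx = np.mgrid[0:80, 0:80]
crown1 = (yy - 30) ** 2 + (xx - 30) ** 2 <= 12 ** 2
crown2 = (yy - 30) ** 2 + (xx - 50) ** 2 <= 12 ** 2
canopy = crown1 | crown2

# Markers: one seed per crown, taken from peaks of the distance transform.
distance = ndi.distance_transform_edt(canopy)
markers, _ = ndi.label(distance > 0.7 * distance.max())

# Watershed on the inverted distance separates the touching crowns.
labels = watershed(-distance, markers, mask=canopy)
print("detected tree crowns:", labels.max())    # expected: 2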
A Survey on Dynamic Load Balancing Strategies for A Distributed Computer System
B.Srimathi, Dr.M.Ravindran
A distributed system consists of independent workstations connected, usually, by a local area network. The IT infrastructure is playing an increasingly important role in the success of a business. Market share, customer satisfaction and company image are all intertwined with the consistent availability of a company’s web site. Network servers are now frequently used to host ERP, e-commerce and a myriad of other applications. The foundation of these sites, the e-business infrastructure, is expected to provide high-performance, highly available, secure and scalable solutions to support all applications at all times. However, the availability of these applications is often threatened by network overloads as well as server and application failures. Resource utilization is often out of balance, resulting in the low-performance resources being overloaded with requests while the high-performance resources remain idle. Server load balancing is a widely adopted solution to these performance and availability problems. This paper presents a survey of dynamic load balancing strategies for a distributed environment.
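One representative dynamic strategy of the kind such surveys cover is least-connections dispatch: each request goes to the server currently handling the fewest active connections. The sketch below is illustrative only; server names and the request pattern are made up.

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}    # current connection count per server

    def dispatch(self) -> str:
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def finish(self, server: str) -> None:
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["web1", "web2", "web3"])
assignments = [lb.dispatch() for _ in range(5)]   # 5 requests arrive, none finished yet
print(assignments)                                # spreads across web1..web3
lb.finish("web1")                                 # a request on web1 completes
print(lb.dispatch())                              # next request goes back to web1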