Image stitching and image steganography can together provide security for any image that has to be sent over a network or transferred by any electronic means. There is a text message and a secret image to be sent; the secret image is divided into parts. The first phase is the Encrypting Phase, which converts the actual secret message into cipher text using the AES algorithm. In the second phase, the Embedding Phase, the cipher text is embedded into one part of the secret image. The third phase is the Hiding Phase, in which steganography is performed on the output of the Embedding Phase and on the other parts of the image, camouflaging each part behind another image using least-significant-bit replacement. These individual parts are sent to the intended receiver. At the receiver's end, the Hiding Phase and the Embedding Phase are reversed in turn. The recovered parts are stitched together using the k-nearest-neighbour method, and SIFT features are used to improve the quality of the stitched image.
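For illustration only, a minimal sketch of the least-significant-bit replacement step is given below, assuming an 8-bit grayscale cover image held in a NumPy array and a byte-string payload (for example, the AES cipher text). The function names are hypothetical and this is not the authors' full stitching pipeline.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Replace the least significant bit of each pixel with one payload bit."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.astype(np.uint8).flatten()          # copy of the cover pixels
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # clear LSB, set payload bit
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back n_bytes hidden by embed_lsb."""
    bits = stego.astype(np.uint8).flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```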
Fuzzy and Level Sets Based Design Approach for Brain Tumor Segmentation
Ms. Ashwini Thool, Prof. Prashant R. Indurkar, Prof. S. M. Sakhare
This paper deals with the implementation of image enhancement, linear contrast stretching, filtering, and bias field correction and estimation. The brain is normally viewed through an MRI or CT scan; an MRI-scanned image is used for the entire process. MRI scanning is more comfortable than other scans for diagnosis and does not affect the human body because it uses no ionizing radiation. A methodology based on Modified Fuzzy C-Means (MFCM) and level-set segmentation is proposed. It offers high accuracy and reproducibility, reduces the time needed for analysis, and detects the extent and shape of the tumor in a brain MRI image.
Recently, in the world of digital technologies, and especially in computing, there has been a tremendous increase in crime, so the investigation of such cases deserves much more importance. Computer investigation is the process of uncovering and interpreting electronic data for use in computer forensic analysis. Text clustering is the automatic grouping of text documents into clusters so that documents within a cluster have high similarity to one another but are dissimilar to documents in other clusters. We carry out extensive experimentation with two well-known clustering algorithms, K-means and hierarchical single/complete/average link. A number of unstructured text files are given as input, and the output is generated in a structured format. Previous experiments have been performed with different combinations of algorithms and parameters, and related studies in the literature are significantly more limited than our study. Our studies show that the K-means and hierarchical algorithms provide the best results for our application domain.
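As a rough sketch of the clustering step (not the authors' implementation), documents can be vectorised with TF-IDF and grouped with K-means using scikit-learn; the function name and the choice of k = 5 are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_documents(texts, k=5):
    """Group unstructured text files into k clusters by TF-IDF similarity."""
    X = TfidfVectorizer(stop_words="english").fit_transform(texts)  # sparse doc-term matrix
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
```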
Digital Library: Using Hybrid Book Recommendation Engine
Prof. Sulbha Ghadling, Kanchan Belavadi, Shubhangi Bhegade, Pradnya Ghojage, Supriya Kamble
Recommender systems have emerged as critical tools that help alleviate the burden of information overload for users. Since these systems have to deal with a variety of modes of user interaction, collaborative recommendation must be sensitive to a user's specific context and changing interests over time. Our approach is to build a digital library system containing soft copies in the form of PDFs, PPTs, etc. To make it more efficient we add a recommendation system that suggests books to students based on their issuance history, rating patterns, and viewing of books. Our idea is to apply a hybrid recommendation system that combines effective individual recommendation algorithms.
Re-Ranking Based Image Categorization Using Saliency Driven Nonlinear Diffusion Filtering -A Review
Ms. Nisha Barsagade, Prof. Mr. Mayur Dhait
A new technique is proposed for re-ranking-based image categorization using saliency-driven nonlinear diffusion filtering. Images are categorized by re-ranking, using multi-scale information fusion based on the original image. The foreground features, which are important for image categorization, and the background regions, whether regarded as context for the foreground or as noise, can be handled globally by fusing information from different scales. To preserve the foreground features and deal effectively with the background, saliency-driven nonlinear diffusion filtering is used and re-ranking categorizes the image.
Review of Social Collaborative Filtering Recommender System’s Methods
Pratibha Yadav, Kumari Seema Rani, Sonia Kumari
Recommender systems play a vital role in e-commerce. The goal of a recommender system is to present the user with personalized information that matches the user's interests. Nowadays, users' interest is leaning towards social networks. Social networking sites provide users a platform to connect and share information with other users who have similar interests, and their popularity is increasing day by day. Recommender systems now use this social information in their analysis and prediction process. The collaborative filtering approach is regarded as the most broadly adopted technique in recommender systems: it recommends an item to a user based on the preferences of other users who share analogous interests with the active user. In this paper, we present a study of collaborative filtering based social recommender systems.
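A minimal sketch of the memory-based collaborative filtering idea described above, assuming a dense user-item rating matrix with zeros for unrated entries; the function name, the cosine-similarity weighting, and k = 5 neighbours are illustrative assumptions, not a specific method surveyed in the paper.

```python
import numpy as np

def predict_rating(ratings, user, item, k=5):
    """Predict ratings[user, item] from the k most similar users (cosine similarity)."""
    norms = np.linalg.norm(ratings, axis=1) + 1e-9
    sims = ratings @ ratings[user] / (norms * norms[user])   # similarity to every user
    sims[user] = -1.0                                        # exclude the active user
    neighbours = np.where(ratings[:, item] > 0)[0]           # users who rated the item
    top = neighbours[np.argsort(sims[neighbours])[-k:]]      # k most similar raters
    weights = sims[top].clip(min=0)
    if weights.sum() == 0:
        return ratings[ratings > 0].mean()                   # fall back to the global mean
    return float(weights @ ratings[top, item] / weights.sum())
```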
Fast File Downloading Using Network Coding in Distributed System
Prof. Ashvini Jadhav, Pratik P. Raut, Rohan S. Ghorpade, Vishal K. Raut, Chinmay Y. Kale
Downloading files quickly is a part of our social as well as professional lives; modern society cannot function without various downloading facilities. Many systems provide quick downloading of large files for personal and corporate use. If a server holds 1 GB of data and the transfer speed is 100 MB/s, it should theoretically take 10 to 20 seconds to transfer the data to a client PC, but it actually takes 60 to 90 seconds, giving only 10% to 20% of the possible throughput.
The proposed system will provide a facility for downloading a file in minimal time with the help of network coding. A client downloading a file for the very first time downloads it from the server; when another user then tries to download it, the first downloaded copy acts as a replica for the current download. The system uses parallel file-downloading protocols, and the server keeps a record of the files present on every client in tabular form. The concept of distributed storage can also be implemented. The proposed system is therefore expected to increase throughput to 30%-40%.
A Survey: Decentralized Access Control with Anonymous Authentication and Deduplication of Data Stored in Clouds
Mr. Imran D. Tamboli, Prof. Ranjana R. Badre, Prof. Rajeshwari M. Goudar
Cloud computing is a rising computing paradigm in which the resources of the computing framework are provided as a service over the Internet. It introduces new challenges for data security and access control when clients outsource sensitive data for sharing on cloud servers. Existing solutions inevitably impose a substantial processing overhead on the data owner for key distribution and data administration when fine-grained data access control is in demand, and consequently do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control still remains unresolved. The system defines and enforces access policies based on data attributes and, at the same time, permits the data owner to delegate most of the computation involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data content. This can be achieved by exploiting and combining techniques of decentralized key-policy Attribute Based Encryption (KP-ABE). The proposed approach is highly efficient and secure. Data deduplication is also one of the important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing; the paper also presents several new deduplication constructions supporting authorized duplicate check in a hybrid decentralized cloud architecture.
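As a small illustrative sketch (not the paper's KP-ABE construction), convergent encryption derives the key from the content itself, so identical plaintexts encrypt identically and can be deduplicated. Only the key derivation and the duplicate-check tag are shown; the actual block-cipher step and the authorized-check protocol are omitted, and the function name is hypothetical.

```python
import hashlib

def convergent_key_and_tag(data: bytes):
    """Content-derived key plus a duplicate-check tag the storage server can
    compare without ever learning the key or the plaintext."""
    key = hashlib.sha256(data).digest()      # K = H(data); identical data -> identical key
    tag = hashlib.sha256(key).hexdigest()    # tag = H(K), sent to the server for dedup checks
    return key, tag
```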
Position Based On Refining Aggregate Recommendation Assortment
L. Pravin Kumar, Ramesh Krishnan
Recommendation systems are becoming necessary for individual users and for providing recommendations at the individual level in various types of businesses. A recommender system is a personalized information filtering technique used to identify a desired number of items based on the interests of the user. The system uses data on past user ratings and applies various techniques. These techniques concentrate on improving the accuracy of recommendations, but alongside recommendation accuracy it is also necessary to improve the aggregate diversity of recommendations. In this paper, we propose a number of item-ranking techniques and different rating-prediction algorithms to improve recommendation accuracy and aggregate diversity, using a real-world rating dataset.
This paper introduces big data for the understanding of large-volume, complex, growing data sets with multiple autonomous sources, using the HACE theorem, which characterizes the features of the big data revolution from a data-processing perspective. A Big Data e-Health Service application promises to transform the entire cardiovascular-disease care process to become more efficient, less costly, and of better quality. The application involves a data-driven model and demand-driven aggregation of information sources. Big data is transforming healthcare, and e-Health for cardiovascular disease is becoming one of the key driving factors of the innovation process. We examine BDeHS (Big Data e-Health Service) to fulfil big data applications in the e-Health service domain, since existing data mining technologies cannot simply be applied to e-Health services directly. Our design of the BDeHS for cardiovascular disease provides data operation management capabilities and meaningful e-Health usages.
A Survey Paper on Effective Analytical Approaches for Detecting Outliers in a Continuous Time-Variant Data Stream
Mr. Raghav M. Purankar, Prof. Pragati Patil
Outlier detection is an important branch of data mining, an extensively studied field of research in which most of the work focuses on knowledge discovery. A data stream is a massive sequence of data objects generated continuously at a much faster rate. Various approaches and methods are used for outlier detection; some of them use the K-means algorithm, which helps to create similar groups, or clusters, of data points and is the best-known partitional clustering algorithm. Streaming data often cannot be scanned multiple times, and new concepts may keep evolving in the incoming data over time, so outlier detection plays a challenging role in streaming data. Irrelevant attributes can be regarded as noisy attributes when working with data stream objects, and such attributes impose a further challenge. In high-dimensional data the number of attributes associated with the dataset is very large, which makes the dataset unmanageable. Clustering is a data stream mining task that is very useful for gaining insight into the data and its characteristics, and it is also used as a pre-processing step in the overall mining process, for example for outlier detection and for building a hybrid approach. The purpose of this paper is to review a hybrid approach to outlier detection alongside other approaches that use the K-means algorithm for clustering the dataset with techniques such as the Euclidean distance measure. Various application domains of outlier detection are also discussed.
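A minimal sketch of the K-means-plus-Euclidean-distance idea surveyed here, assuming a static batch of numeric points (streaming windows and concept drift are not handled); the threshold rule and function name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_outliers(X, k=3, z=3.0):
    """Flag points whose Euclidean distance to their cluster centroid is more
    than z standard deviations above the mean distance."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    return dist > dist.mean() + z * dist.std()   # boolean outlier mask
```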
Stability Analysis of a Non-Vaccinated SIR Epidemic Model
Prof. Dr. Sumit Kumar Banerjee
A proper mathematical model structure is required to understand the dynamics of the spread of infectious diseases. In this paper I discuss a general SIR epidemic model which represents the direct transmission of infectious disease. Local and global stability of both the disease-free and the endemic equilibrium are derived using the evaluated reproduction number.
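For reference, the standard non-vaccinated SIR system and basic reproduction number take the form below; the paper's exact parameterization and its global stability argument may differ.

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I, \\[4pt]
R_0 &= \frac{\beta S_0}{\gamma}, \qquad
\text{disease-free equilibrium stable for } R_0 < 1, \quad
\text{endemic equilibrium stable for } R_0 > 1.
\end{aligned}
```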
Programming Paradigms in the Context of the Programming Language
Sonia Kumari, Pratibha Yadav, Kumari Seema Rani
The choice of the first programming language and the corresponding programming paradigm is critical for the development of a programmer. In computer science, several programming paradigms can be recognized. Despite the huge number of programming languages introduced over the last fifty years, the key issues in programming education remain the same, and choosing an appropriate programming language is still challenging. In this paper we overview some of the most important issues relevant to programming and the challenges it poses, both in terms of programming paradigms and in terms of programming languages. We also overview the concept of the abstract machine and the operations performed by the interpreter, and present some results about the usage of programming languages.
Ontologies are used in various fields such as knowledge management, information extraction, and the semantic web. From the point of view of a particular application, the main problem faced is to determine which of the ontologies would best suit a particular problem. The (re)use of ontologies without anomalies is a critical point in industry for producing successful projects; hence the selection of an evaluation technique is mandatory. This paper presents a comparison of various evaluation methods based on different parameters that will help users select the best one for the situation they are in.
A Review Of Inferring Latent Attributes From Twitter
Surabhi Singh Ludu
This paper reviews literature from 2011 to 2013 on how latent attributes such as gender and political leaning can be inferred from a person's Twitter and neighborhood data. Prediction of demographic data can bring value to businesses and can prove instrumental in legal investigations. Moreover, political leaning and ethnicity can be inferred from the wide variety of user data available online. The motive of this review is to understand how large datasets can be built from available Twitter data. The tweeting and retweeting behavior of a user can be used to infer attributes such as gender and age. We also try to understand the applications of machine learning and artificial intelligence in this task and how they can be improved, and we explore how this field can be expanded in the future and possible avenues for future research.
The Interdisciplinary Nature Of Knowledge Discovery Databases And Data Mining
T. Bharathi
Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields such as machine learning, statistics, and databases. In the last few years, knowledge discovery and data mining tools have been used mainly in experimental and research environments and by business users. A large degree of the current interest in KDD is the result of the media attention surrounding successful KDD applications, for example the focus articles within the last two years in Business Week, Newsweek, Byte, PC Week, and other large-circulation periodicals. Unfortunately, it is not always easy to separate fact from media hype. Nonetheless, several well-documented examples of successful systems can rightly be referred to as KDD applications and have been deployed in operational use on large-scale real-world problems in science and in business.
Comparative Study Of Semantic Search Engines
Vidhi Shah, Akshat Shah, Asst. Prof. Khushali Deulkar
The amount of information accumulated on the internet is massive, and it is searched using search engines. Current search engines look for the needed information based on the keywords the user has typed; this search and retrieval is based on syntactic analysis of keywords rather than contextual analysis. To overcome this issue, the need for semantic search engines is increasing. The Semantic Web is an extension of the current web in which information retrieval is based on contextual analysis of the user's search query, giving a more meaningful search. In this paper we identify the different approaches and techniques used in different search engines, and analyze and compare various semantic web search engines based on various parameters to find their features and limitations. Based on these analyses, a comparative study is done to identify the relative strengths of semantic web search engines.
Cognitive Perspective Of Attribute Hiding Factor Complexity Metric
Francis Thamburaj, A. Aloysius
Information hiding is one of the key features and a powerful mechanism in object-oriented programming. It is critical for building large, complex software that can be maintained economically and extended with ease. As information hiding improves software productivity and promotes software quality, it is essential to measure it. Further, the safety of data or attribute values plays a vital role in the reliability of software, which is a key factor determining its success; data safety can be achieved by hiding attributes. Hence it is necessary to measure the attribute hiding factor more accurately. This article introduces a new complexity metric called the Cognitive Weighted Attribute Hiding Factor. It is defined and mathematically formulated to yield better results than the original Attribute Hiding Factor complexity metric, which is statistically demonstrated through a comparative study. Further, the new complexity metric is tested for empirical validity and applicability with a case study. The results show that the new complexity metric index, thanks to the combination of encapsulation and attribute scoping, is better, broader, and truer to reality.
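For reference, the original MOOD Attribute Hiding Factor on which the cognitive-weighted variant builds is the ratio of hidden to defined attributes over all classes; the paper's cognitive weights for attribute scoping are added on top of this and are not reproduced here.

```latex
\mathrm{AHF} \;=\; \frac{\sum_{i=1}^{TC} A_h(C_i)}{\sum_{i=1}^{TC} A_d(C_i)},
\qquad A_d(C_i) = A_v(C_i) + A_h(C_i),
```

where \(TC\) is the total number of classes, \(A_h(C_i)\) the hidden (non-public) attributes of class \(C_i\), and \(A_v(C_i)\) its visible attributes.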
Ontology-Based Multi-Document Summarization
Bhakti Mehta, Varsha Marathe, Priyanka Padvi and Manjusha Shewale
An ontology is defined as a conceptual representation of data and the relationships between those data. In this paper, we propose to use an ontology for summarization of multiple documents related to a specific domain. We explore various techniques that can be used for summarization and then focus on a particular approach for summarizing documents belonging to a particular domain. The domain we consider is disaster management; as an example we take the earthquake that struck Nepal in April 2015 and use it to demonstrate summarization of multiple documents.
Lung cancer is the most significant cause of cancer death for both men and women, and early detection is very important to improve a patient's chance of survival. For early and automatic lung tumor detection, we propose a system that relies on textural features. There are five main phases in the proposed CAD system: image pre-processing, segmentation, feature extraction, and classification of the tumor as benign or malignant. The lung parenchyma region is segmented as a pre-processing step because the tumor resides within that region. This reduces the search area over which we look for tumors, thereby increasing processing speed, and it also reduces the chance of false identification of a tumor. Image pre-processing is done using a fuzzy filter, segmentation is done using the watershed algorithm, and textural features are extracted from the lung nodules using the grey-level co-occurrence matrix (GLCM). Finally, an SVM classifier is used to classify the nodules as benign or malignant.
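A small sketch of the GLCM texture step under simple assumptions (a single horizontal pixel offset and 8-level quantisation of a 2-D nodule region); this is not the authors' exact feature set, and the function name is hypothetical.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Horizontal grey-level co-occurrence matrix and two classic Haralick
    descriptors (contrast, energy) of a nodule region of interest."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)  # quantise
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # count horizontal pairs
    glcm /= glcm.sum()                                          # normalise to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy
```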
Cloud computing opens a new stream in IT, as it can provide various elastic and scalable IT services in a pay-as-you-go way, allowing users to reduce the cost of their own IT infrastructure. Users of cloud storage services do not physically maintain direct control over their data, which makes data security one of the major concerns in using the cloud. Previous research already allows data integrity to be verified without the presence of the actual data file. When this verification is done by a trusted third party, the process is called data auditing, and this third party is called an auditor. However, existing schemes suffer from several drawbacks. First, the necessary authorization or authentication process is missing between the auditor and the cloud service provider, meaning anyone can challenge the cloud service provider for a proof of integrity of a certain file, which puts the quality of the so-called 'auditing-as-a-service' at risk. Second, although some previous work based on BLS signatures can support fully dynamic data updates over fixed-size data blocks, it only supports updates with fixed-size blocks as the basic unit, which we call coarse-grained updates. As a result, every small update causes re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide an analysis of all possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose a technique that can reduce the communication overhead of verifying small updates. Theoretical analysis and experimental results show that our scheme offers not only enhanced security and flexibility but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
The Future Generation Intelligent System Need For Human Memory To Detect Crime Using Virtual Brain Technology.
R. Uma, S. Kavitha, V. Ilakkiya
In this paper we suggest an idea for detecting human crime using virtual brain technology. The human brain is the most valuable creation of God in the world, but a "virtual brain" is a machine that can act as a human brain: it can think, take decisions, and respond. The unique identity of a human being is their own creative knowledge, which is destroyed after death; with virtual brain technology we can recreate that knowledge. The virtual brain is an attempt to reverse-engineer the human brain and recreate it at the cellular level inside a computer simulation. After the death of the body, the virtual brain can act in the person's place, so even after a person's death we do not lose their knowledge, intelligence, personality, feelings, and memories.
In this era of globalization, the extensive use of smartphones in our day-to-day life has become a very important factor. According to surveys, nearly 80% of the smartphone industry is dominated by Android; therefore the security and privacy of the user become crucial factors to be taken care of. Since Android is an open-source platform, issues like this can be resolved easily and effectively with the help of applications designed by programmers around the globe. Smartphones running Android have an in-built password detection mechanism (character/pattern/gesture) which helps differentiate the real user from an unknown user. The proposed systems are designed to work from their own perspective to meet users' demands, with the help of pre-defined sensors that are thoroughly elaborated after extensive study. This paper surveys the research done by predecessors on related subjects to provide better security and to overcome this breach of privacy. While discussing the security measures, the system assumes a scenario in which an intruder tries to access valuable information of the user without his knowledge; when faced with a password barrier, the intruder may try to bypass it by entering another password, and a wrong password may result in multiple failed attempts. There has to be a solution to detect this breach and, if possible, overcome it. Along with this, the proposed system should make sure that the application consumes the least amount of battery and uses a minimum amount of system memory.
Gamma Corrected Bright Channel Filtering Based Fast Single Image De-hazing
Navjot Kaur, Dr. Lalita Bhutani
The proposed research work is based on the concept of visibility enhancement in adverse weather conditions, such as dense fog, for camera-captured images, followed by a post-enhancement process to improve the quality of the data in fog-affected images. Because defogging must meet real-time requirements, a new technique is proposed: it uses gamma light correction on the RGB image frame and combines it with a light-variance model to intelligently distinguish foggy scenes in an image. Since the backgrounds of industrial work pictures generally change gradually, a median filter is used to obtain the affected foreground areas. The transmission of the bright channel prior is then refined by the foreground, and every frame is restored directly according to the updated transmission estimate. The defogging computation greatly reduces the running time, cutting processing to about one third of the base method and making practical application of image defogging feasible. Test results demonstrate that the algorithm has high precision in recognizing foggy scenes.
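A minimal sketch of the gamma light-correction step, assuming an RGB frame normalised to [0, 1]; the bright-channel prior, median filtering, and transmission refinement stages are not shown, and the function name and default gamma are illustrative.

```python
import numpy as np

def gamma_correct(frame, gamma=0.6):
    """Brighten (gamma < 1) or darken (gamma > 1) a normalised RGB frame."""
    return np.clip(frame, 0.0, 1.0) ** gamma
```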
The credit card has become the most popular mode of payment for both online and regular purchases, and cases of fraud associated with it are also rising. Credit card fraud is increasing day by day regardless of the various techniques developed for its detection. Fraudsters are such experts that they generate new ways of committing fraudulent transactions every day, which demands constant innovation in detection techniques. Many techniques based on artificial intelligence, fuzzy logic, neural networks, logistic regression, naïve Bayes, machine learning, sequence alignment, decision trees, Bayesian networks, meta-learning, genetic programming, and so on have evolved for detecting credit card fraud. This paper presents a survey of the techniques used in various credit card fraud detection mechanisms.
Robust Construction Of Multiday Itinerary Planning
Ms. P. Jayapriya, Mr. S. Gunasekaran, Ms. S. Nathiya
Itinerary planning is an important process for travel agencies and needs to be done carefully to satisfy the requirements of users. Creating an efficient and economical trip plan is the most annoying job for a backpack traveler. Although travel agencies can provide some predefined itineraries, these are not tailored to each specific customer. Previous efforts address the problem by providing an automatic itinerary planning service, which organizes points of interest (POIs) into a customized itinerary. Because the search space of all possible itineraries is too costly to explore fully, most work simplifies the complexity by assuming that the user's trip is limited to some important POIs and will be completed within one day. To address this limitation, in this paper we design a more general itinerary planning service, which generates multiday itineraries for users. In our service, all POIs are considered and ranked based on the user's preferences. The problem of searching for the optimal itinerary is a team orienteering problem (TOP), a well-known NP-complete problem. To reduce the processing cost, a two-stage planning scheme is proposed: in its preprocessing stage, single-day itineraries are precomputed via MapReduce jobs; in its online stage, an approximate search algorithm is used to combine the single-day itineraries. In this way, we transform the TOP problem, which admits no polynomial approximation, into another NP-complete problem (the set-packing problem) that has good approximation algorithms. Experiments on real data sets show that our approach can generate high-quality itineraries efficiently.
Databases have been considered very important for a long time, and securing them is now of utmost importance. m-Healthcare systems are very popular now but face security constraints. Distributed m-healthcare systems allow significant patient treatment and medical consultation by sharing personal health information among healthcare providers. The important challenge is to ensure the security of the patient's identity as well as the data. Patients can decide which physicians can access their information by using threshold predicates. Based on this idea we devise a new technique in which the patient can self-control their data with different levels of security: the directly authorized physicians and the patient can respectively decipher the personal health information and/or verify and update the patient's health information at the click of a button.
Detection of Black Hole Attack in DTN with Authentication
D. Humera, M. Veeresh Babu
DTNs, such as sensor networks with connectivity occurring at irregular intervals, transfer packets using the store-carry-forward technique, and routing is characterized by opportunity. Some nodes may act as malicious nodes, and such malicious behaviour, caused by an attacker or hacker, leads to loss of data and increased transmission delay. To overcome this problem we use a service provider, which confidentially provides services to the senders and receivers. Using the vehicular algorithm, we can detect a black hole attack and improve the security and authentication of our data; the algorithm chooses the shortest path to transfer the data from sender to receiver, and if the chosen path contains a malicious node, the router immediately skips to another nearby path.
Energy Efficient Data Transfer In Mobile Wireless Sensor Network
T. Mary Rincy Mol, Mrs. A. Sahaya Princy, M.Tech.
The proposed cross-layer operation model is implemented to improve the energy consumption and system throughput of an IEEE 802.15.4 Mobile Wireless Sensor Network (MWSN). The model integrates four layers in the network operation: 1) the application layer, used for node location; 2) the network layer, which constructs the routes; 3) the medium access control layer, used for message forwarding; and 4) the physical layer. The location of the mobile nodes is embedded in the routing operation after the route discovery process. The location information is then utilized by MAC-layer transmission power control to adjust the transmission range of the node, minimizing the power used by the network interface and thus reducing the energy consumption of the node(s). The model employs a mechanism that limits neighbor discovery broadcasts to active routes only; reducing control packet broadcasts between nodes reduces the network's consumed energy and also decreases the occupation period of the wireless channel. The model's operation leads the network to consume less energy while maintaining the packet delivery ratio.
Performance Analysis of JPEG2000 for Different Wavelet Transforms in Satellite Images
P. Krishnamoorthy, V. P. Ajay
Nowadays onboard image compression algorithms have their own place in all satellites sent to space for various applications. The images obtained by satellites are of huge volume, so there arises a need to store the images onboard. Onboard image compression is an efficient technique used to reduce the size of an image so that the amount of data stored in the onboard mass memory can be increased, and it reduces the power and downlink bandwidth required to transmit the image to the ground station. JPEG2000 is one of the most widely used image compression algorithms for observation satellites due to its high compression ratio. In this paper, the performance of the JPEG2000 algorithm is analyzed for various wavelet families and evaluated using the peak signal-to-noise ratio (PSNR) of the reconstructed image.
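The evaluation metric named above is standard; a minimal sketch for an 8-bit image follows, with the function name chosen for illustration.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the original image and the
    wavelet-compressed then reconstructed image."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```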
Privacy And Detecting Malicious Packet Dropping In Wireless Ad Hoc Network
M. Sindhuja & Mrs. A. Sahaya Princy, M.Tech.
Packet dropping is a common attack in wireless ad hoc networks. This threat occurs when data is transmitted from a source to a destination, and the attack may be of two types: malicious packet dropping or link error. It can be countered by the proposed scheme, which implements a Homomorphic Linear Authenticator (HLA), a public auditing scheme for detecting malicious nodes in a WANET. The HLA acts like an auditor that detects packet-dropping behaviour in the network, and the main advantage of the scheme is that data is transmitted securely in the WANET. When the packet dropping rate is comparable to the channel error rate, conventional algorithms that are based on detecting the packet loss rate cannot achieve satisfactory detection accuracy; to improve the detection accuracy, the correlations between lost packets are identified. An HLA-based public auditing architecture is developed to verify the truthfulness of the packet loss information reported by nodes. This implementation is thus useful for avoiding packet dropping attacks in wireless ad hoc networks.
Providing Lifetime Optimization and Security in Wireless Sensor Network
Sindhuja R., Mrs. Vidhya S.
A wireless sensor network is a self-organized wireless network system constituted by a number of energy-limited micro sensors for industrial application (IA). In this project, a secure and efficient cost-aware secure routing protocol is proposed to address two conflicting issues: lifetime optimization and security. These conflicting issues are addressed through energy balance control and random walking. We then observe that the energy consumption is severely disproportional to the uniform energy deployment for the given network topology, which greatly reduces the lifetime of the sensor network. To solve this problem, an efficient non-uniform energy deployment strategy is used to optimize the lifetime and message delivery ratio under the same energy resource and security requirements. A quantitative security analysis of the proposed routing protocol is also provided.
As an emerging technology and business paradigm, cloud computing platforms provide easy access to a company's high-performance computing and storage infrastructure through web services. Cloud computing gives numerous benefits to clients, for example approachability and accessibility. As the information is accessible over the cloud, it may be accessed by diverse clients, and it may include sensitive data of an organization. One issue is therefore to give access to authenticated clients only. But the data can still be accessed by the owner of the cloud, so to prevent information being accessed by the cloud administrator, we use encryption algorithms to encrypt the data stored in the cloud, which provides security for the information so that only authenticated users get access to the secure data. Deduplication of data is a technique for reducing the amount of storage space an organization needs to save its data. The proposed system supports authorized duplicate checking in a hybrid cloud architecture, and the proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations. To reduce the weaknesses of convergent encryption, we propose an LFSR (Linear Feedback Shift Register) based encryption technique. Security analysis shows that the system is secure in terms of the definitions specified in the proposed security model.
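A toy sketch of a Fibonacci LFSR keystream generator for illustration; the paper's actual tap polynomial, key handling, and combination with the deduplication scheme are not specified here, and the function name is hypothetical.

```python
def lfsr_keystream(seed: int, taps: tuple, nbits: int, length: int):
    """Return `length` keystream bits from an nbits-wide Fibonacci LFSR."""
    state = seed & ((1 << nbits) - 1)
    bits = []
    for _ in range(length):
        bits.append(state & 1)                      # output bit
        fb = 0
        for t in taps:                              # XOR of the tapped positions
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))  # shift right, feed back at the top
    return bits
```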
The Multiple Travelling Salesman Problem (MTSP) is a generalization of the well-known TSP in which more than one salesman is allowed in the solution, and its characteristics seem more appropriate for real-life applications. In this paper we study a problem called the P-TSP Seasonal Constrained Model. Let n be the number of cities, P the number of salesmen, and let the third dimension be the season S. Let M be a set of common cities, a subset of the n cities, for the salesmen. Each salesman has to start their tour from the headquarters, city 1, in the first season only, and at the end of season 1 all the salesmen have to meet at a common city in M. They then start their tours from that common city in the next season, visit some more cities, and at the end of season 2 each of them has to meet at another common city in M; they continue travelling in this way up to the (r-1)th season. Finally, in the rth season all the salesmen have to reach the headquarters city. Together the salesmen have to visit n0 cities other than the cities of M, where n0 < n - m. The objective is to complete P tours in r seasons with minimum total cost/distance.
Dynamic and intelligent classifier for efficient retrieval from web mining
M. Parvathi, MCA, M.Phil.; S. Thabasu Kannan, M.Tech., Ph.D., MBA
In recent years the WWW has turned into one of the most important distribution channels for private, scientific, and business information. The reasons for this development are the relatively low cost of publishing a website and the more up-to-date view of a business it offers to millions of users. As a result, the WWW has been growing tremendously over the last five years: Google recently reported that it is currently indexing over 7 billion text documents, and the number of registered international top-level domains has increased more than nine-fold over the last five years.
The main aim of this paper is the effective and efficient retrieval of required documents from the web pages of various websites, where effectiveness and efficiency are measured in terms of relevancy and similarity. To achieve greater relevance when retrieving the required documents, we can use KDD techniques to extract specific knowledge from the WWW, and to achieve relevancy and similarity some classification methods are used. For this purpose we have analyzed various classification methods in data mining and evaluated their performance. Precision and recall are used to measure performance and to calculate the accuracy and the number of documents retrieved during a particular period of time. The level of accuracy plays an important role in identifying the level of relevancy and similarity; we define a threshold for comparison, and if the evaluated accuracy is greater than the threshold then the similarity level is high, otherwise it is low. If the number of relevant documents retrieved during a particular period of time is large, the level of efficiency increases; otherwise it is low. For testing purposes we have taken 30,000 single HTML documents from 300 web sites, and we compare the efficiency of the newly developed classifier against 4 existing classification techniques.
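The performance factors mentioned above (precision and recall) are computed per query as in the minimal sketch below, which assumes the retrieved and relevant documents are available as sets of identifiers; the function name is illustrative.

```python
def retrieval_metrics(retrieved: set, relevant: set):
    """Precision = fraction of retrieved documents that are relevant;
    recall = fraction of relevant documents that were retrieved."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```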
Hybrid visual servoing for tracking multiple targets with a swarm of mobile robots
Gossaye Mekonnen Alemu
A new hybrid visual-servoing-based method for tracking multiple targets using a swarm of mobile robots is proposed. This distributed algorithm combines position-based visual servoing (PBVS) and image-based visual servoing (IBVS), and consists of two parts: local interaction among the robots and target tracking. A neural-network extended Kalman filter (NEKF) is used to reduce the noise that arises while tracking targets. When the targets are slower than the robots, a Lyapunov function can be used to show that the robots asymptotically converge to the vertices of the desired configurations while tracking the targets. For practical execution of the algorithm, it is necessary to realize the observation capability of each robot in an efficient and inexpensive way; infrared proximity sensors and a monocular camera are applied to fulfil these requirements. Simulation results confirm that the proposed distributed multi-target tracking method using a robot swarm is effective and straightforward to implement.
A Survey of Commonly Used Cryptographic Algorithms in Information Security
Manjula G., and Dr. Mohan H.S.
Progress in computing power and parallelism technology is creating obstacles to credible security, especially in electronic information transactions under cryptosystems. An enormous set of cryptographic schemes exists, each with its own strong and weak characteristics: some schemes rely on long keys while others support the use of small keys, and practical cryptosystems are either symmetric or asymmetric in nature. With the fast progression of digital data exchange by electronic means, information security is becoming much more important in data storage and transmission. Network security is the most vital component of information security because it is responsible for securing all information passed through networked computers. Cryptography is one of the main categories of computer security; it converts information from its normal form into an unreadable form using encryption and decryption techniques. This survey compares popular encryption techniques to support an informed selection of both key and cryptographic scheme.
Study of Classification of Diseases by Genetic Algorithm for Multiclass Support Vector Machine Using Hadoop
Mr. Ankit R. Deshmukh, Prof. S. P. Akarte
For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Hadoop and MapReduce are used to handle large volumes of variable-size data. This work focuses on combining a feature selection technique based on a genetic algorithm with support vector machines (SVM) for medical disease classification. The SVM is a relatively novel classification technique that has shown higher performance than traditional learning methods in many applications. The idea is to use the GA as an optimizer to find the optimal values of the SVM hyper-parameters and to adopt a supervised learning approach to train the SVM model. In this paper we propose a genetic algorithm (GA) based classification method.
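A toy sketch of the GA-as-optimizer idea (truncation selection plus Gaussian mutation over log-scaled C and gamma, with cross-validated accuracy as fitness); it runs on a single machine and ignores the Hadoop/MapReduce layer, and all names and parameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_tune_svm(X, y, pop_size=10, generations=5, seed=0):
    """Evolve (log2 C, log2 gamma) pairs; fitness is 3-fold CV accuracy."""
    rng = np.random.default_rng(seed)
    fitness = lambda ind: cross_val_score(
        SVC(C=2.0 ** ind[0], gamma=2.0 ** ind[1]), X, y, cv=3).mean()
    pop = rng.uniform(-5, 5, size=(pop_size, 2))              # initial population
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]      # selection (best half)
        children = parents + rng.normal(0.0, 0.5, parents.shape)  # Gaussian mutation
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return 2.0 ** best[0], 2.0 ** best[1]                     # tuned (C, gamma)
```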
HIGHLY SECURE REVERSIBLE WATERMARKING MECHANISM USING REVERSIBLE DE-IDENTIFICATION
Fasiha Shereen, B A Sharath Manohar
In recent years, despite the massive amount of work on protecting and securing images using watermarking schemes, some flaws still exist. One way to get rid of them is to use a well-built reversible watermarking scheme, as proposed here. This work intends to make the process reversible, invisible, and highly secure. De-identification is a process which can be used to ensure privacy by concealing the identity of the individuals captured, and this additional de-identification step helps achieve high security. One important challenge is to make the obfuscation process reversible so that the original image can be recovered by persons in possession of the right security credentials. This work presents a novel reversible de-identification method that can be used in conjunction with any obfuscation process; here the obfuscation process used is k-same obfuscation. The residual information needed to reverse the obfuscation process is compressed, authenticated, encrypted, and embedded within the obfuscated image using a two-level reversible watermarking scheme. The proposed method ensures an overall single-pass embedding capacity of 1.25 bpp, where 99.8% of the images considered required less than 0.8 bpp and none of them required more than 1.1 bpp. Experimental results further demonstrate that the proposed method managed to recover and authenticate all images considered.
Performance Measures for a three-unit compact circuit
Ashish Namdeo, V. K. Pathak, Anil Tiwari
This paper analyses a three-component system with a single repair facility. Denoting the failure times of the components by T1 and T2 and the repair time by R, the joint survival function of (T1, T2, R) is assumed to be that of the trivariate distribution of Marshall and Olkin. Here, R is an exponential variable with parameter α, and T1 and T2 are independent of each other. The Laplace transform is used to find the Mean Time Between Failures, Availability, and Mean Down Time, and a table of reliability measures is given at the end.
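For context, the standard steady-state relations that these measures satisfy are shown below; the paper's Marshall-Olkin derivation via Laplace transforms is not reproduced here.

```latex
\mathbb{E}[R] = \frac{1}{\alpha} \quad \text{for } R \sim \mathrm{Exp}(\alpha),
\qquad
A_{\infty} = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MDT}}.
```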
Comparisons Of Restored Blurred-Noisy Dental Images Using Wiener Filter And Lucy-Richardson Technique.
Aanchal Joseph, Mr. Sandeep B. Patil
This paper attempts to compare dental images (grayscale/truecolor) degraded with blur and noise and restored using different de-blurring filters. The images are degraded by a combination of Gaussian/average blur and salt-and-pepper/speckle/Poisson/Gaussian noise for different values of the PSF. All the degraded images are then restored using the Wiener filter and the Lucy-Richardson filter technique: restoration by the Wiener filter is performed for different values of the estimated NSR, and restoration by Lucy-Richardson for different numbers of iterations. The restored images are then compared on the basis of their SNR, MSE, and PSNR values.
This comparison is made to find which filter technique removes which combination of blur and noise, at what value of the PSF, with high PSNR/SNR and low MSE.
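A minimal frequency-domain sketch of the Wiener restoration step, assuming a known PSF and a scalar noise-to-signal ratio; the Lucy-Richardson iterations are not shown, and the function name is illustrative.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr):
    """Wiener filter in the Fourier domain: F_hat = conj(H) / (|H|^2 + NSR) * G."""
    H = np.fft.fft2(psf, s=blurred.shape)      # PSF zero-padded to the image size
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```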