The fields of computer science and electronics have merged to produce one of the most notable technological advances. Contemporary service innovation is to a significant extent stimulated and enabled by developments in information technology. A fairly recent development is the Internet of Things (IoT), so named because devices and everyday objects are also connected to the Internet. In the IoT, individual objects, and interrelated collections of objects such as those in homes and cars, can be made uniquely identifiable through radio tags, sensors and actuators, and thereby become virtually represented in wireless and wired Internet structures. The IoT has the potential to deliver solutions that dramatically improve energy efficiency, security, health, education and many other aspects of daily life. This paper discusses the vision, the challenges, possible usage scenarios and technological building blocks of the Internet of Things and its various applications. It also discusses how the IoT is shaping the future of development and the challenges it faces.
A Review On Efficient Privacy Preserving And Secure Data Integrity Protection In Regenerating Coding Based Multi-Cloud Storage
Supriya Sahare, Prof. Fazeel I. Z. Qur
Using Cloud Storage, users can remotely store their data and enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes the data integrity protection in Cloud Computing a formidable task. Moreover, users should be able to just use the cloud storage as if it is local, without worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance so that users can resort to a third party auditor (TPA) to check the integrity of outsourced data and be worry-free. In this paper, we propose a secure cloud storage system supporting privacy-preserving public auditing. We further extend our result to enable the TPA to perform audits in multi-cloud storage efficiently.
I2mapreduce: Fine-Grain Incremental Processing In Big Data Mining
Miss Mugdha A. Kulkarni, Prof. I. R. Shai
I2MapReduce is a novel incremental processing extension to MapReduce, the most widely used framework for mining big data. Compared with the state-of-the-art work Incoop, I2MapReduce has three advantages: (i) it performs key-value pair level incremental processing rather than task-level re-computation, (ii) it supports not only one-step computation but also more sophisticated iterative computation, which is widely used in data mining applications, and (iii) it reduces the I/O overhead of accessing preserved fine-grain computation states by incorporating a set of novel techniques. We evaluate I2MapReduce using a one-step algorithm and four iterative algorithms with diverse computation characteristics. Experimental results on Amazon EC2 show significant performance improvements of I2MapReduce compared to both plain and iterative MapReduce performing re-computation.
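The key-value pair level idea can be sketched in a few lines. The toy word-count example below is purely illustrative (the `IncrementalMR` class and its methods are invented for this sketch, not I2MapReduce's API): when an input changes, only the keys it touched are re-reduced, while preserved results for untouched keys are reused.

```python
from collections import defaultdict

def word_count_map(doc_id, text):
    for word in text.split():
        yield word, 1

def word_count_reduce(word, counts):
    return sum(counts)

class IncrementalMR:
    def __init__(self, map_fn, reduce_fn):
        self.map_fn, self.reduce_fn = map_fn, reduce_fn
        self.kv_state = {}   # doc_id -> list of (key, value) pairs it produced
        self.results = {}    # key -> reduced value

    def update(self, changed_docs):
        """Re-map only changed inputs; re-reduce only the affected keys."""
        dirty = set()
        for doc_id, text in changed_docs.items():
            for k, _ in self.kv_state.get(doc_id, []):
                dirty.add(k)                          # keys losing old pairs
            pairs = list(self.map_fn(doc_id, text))
            self.kv_state[doc_id] = pairs
            dirty.update(k for k, _ in pairs)         # keys gaining new pairs
        grouped = defaultdict(list)                   # group values for dirty keys only
        for pairs in self.kv_state.values():
            for k, v in pairs:
                if k in dirty:
                    grouped[k].append(v)
        for k in dirty:
            self.results[k] = self.reduce_fn(k, grouped[k])
        return self.results

mr = IncrementalMR(word_count_map, word_count_reduce)
mr.update({"d1": "big data mining", "d2": "big data"})
mr.update({"d2": "big graph"})  # only d2 changed; the count for "mining" is reused
```

The second `update` re-reduces only the keys affected by `d2`, which is the pair-level counterpart of Incoop's task-level re-computation.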
Enhanced Security In Data Transmission In Wireless Sensor Network Using Randomized Path
M Uma, Dr. A Senthil Kumar
A wireless sensor network (WSN) is one of the most important means of gathering specific information from the surrounding environment, so it is important to safeguard its sensitive data from unauthorized access. Because communication is broadcast, the WSN is susceptible to attacks. Two key attacks in a WSN are the compromised node (CN) attack and the denial of service (DoS) attack. A compromised node can create a black hole, interrupting active information delivery, while a denial of service attack attempts to make a network resource unavailable to its intended users, for example by temporarily or indefinitely interrupting or suspending services. In this paper we advance a mechanism that generates randomized multipath routes, so that the shares of different packets change over time. Even if the adversary knows the routing algorithm, it cannot identify the routes traversed by each packet. Beyond randomness, the generated routes are also highly dispersive and energy efficient, making them quite capable of bypassing black holes.
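A rough sketch of the randomized-route idea, under the simplifying assumption of a random walk biased against revisiting nodes (the paper's actual mechanism may differ): each packet share follows an independently drawn route, so no single black hole sits on every route. The topology below is an invented toy example.

```python
import random

def random_route(topology, src, dst, rng, max_hops=20):
    """Random walk from src to dst that avoids revisits when possible."""
    route, node = [src], src
    for _ in range(max_hops):
        if node == dst:
            return route
        # prefer unvisited neighbors; fall back to any neighbor if stuck
        candidates = [n for n in topology[node] if n not in route] or topology[node]
        node = rng.choice(candidates)
        route.append(node)
    return None  # give up after max_hops

topology = {  # tiny example WSN as an adjacency list
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
rng = random.Random(7)
# draw one randomized route per packet share, keeping the successful ones
routes = [r for r in (random_route(topology, "A", "E", rng) for _ in range(20)) if r]
```

Because each share's route is drawn independently, an adversary occupying one node cannot predict or intercept all shares of a packet.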
Analysis Of Genetic Crossover Techniques Based On Roulette Wheel Selection Algorithm And Steady State Selection Algorithm
G. S. Geethamani, Dr M. Mayilvaganan
The basic idea behind the proposed method is to analyze genetic crossover techniques using the roulette wheel selection and steady state selection algorithms.
The proposed algorithm is applied to the genetic operators, namely mutation, crossover and selection, on a large dataset. The proposed technique is very useful for analysing the impact of genetic crossover techniques on a random population of chromosomes.
After applying the genetic crossover technique, the efficiencies of the roulette wheel and steady state selection algorithms are estimated. The two algorithms are then compared based on the initial population. The extracted rules and analyzed results are demonstrated graphically, and performance is analyzed for different numbers of instances in a large data set.
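The two selection schemes can be illustrated on a toy one-max problem (maximize the number of 1s in a bitstring). This is a generic sketch with invented parameters, not the paper's experimental setup: roulette wheel selection picks parents with probability proportional to fitness, and a steady-state step replaces only the single worst individual per generation.

```python
import random

def fitness(chrom):
    return sum(chrom)  # one-max: count the 1 bits

def roulette_select(pop, rng):
    """Pick a parent with probability proportional to its fitness."""
    total = sum(fitness(c) for c in pop)
    pick, acc = rng.uniform(0, total), 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= pick:
            return c
    return pop[-1]

def crossover(a, b, rng):
    point = rng.randrange(1, len(a))  # single-point crossover
    return a[:point] + b[point:]

def steady_state_step(pop, rng):
    """Steady state: one offspring replaces the current worst individual."""
    child = crossover(roulette_select(pop, rng), roulette_select(pop, rng), rng)
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = child
    return pop

rng = random.Random(42)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(10)]
best0 = max(fitness(c) for c in pop)
for _ in range(200):
    steady_state_step(pop, rng)
best = max(fitness(c) for c in pop)
```

Note the contrast in replacement pressure: a generational GA would rebuild the whole population each step, whereas the steady-state step never discards the current best individual.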
Survey On Meteorological Weather Analysis Based On Naïve Bayes Classification Algorithm
P. Vanitha, Dr M. Mayilvaganan
The aim of this research work is to analyse meteorological data to predict the monsoon seasons, separating the weather data by longitude and latitude so that the reliability of temperature, humidity, rainfall and cyclone factors can be analysed. The present paper analyses monthly weather and rainfall data for the Indian monsoon months between the years 2000 and 2014 to find the relationship between rainfall, humidity and temperature. After establishing this reliability, Naïve Bayes classification can predict the meteorological condition of a region based on the seasonal cyclone and humidity levels. It also provides specific services for assessing pollution impacts from companies and thermal power plants. Environmental correlations play a significant role in determining climate trends, which are crucial in understanding short- and long-term climate behaviour.
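As a generic illustration of the classification step (the toy weather records, feature values and labels below are invented, not the paper's dataset), a categorical Naïve Bayes with Laplace smoothing can be sketched as:

```python
from collections import Counter, defaultdict
import math

def train_nb(rows, labels):
    classes = Counter(labels)
    # counts[c][i][v] = how often feature i takes value v within class c
    counts = defaultdict(lambda: defaultdict(Counter))
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[c][i][v] += 1
    return classes, counts

def predict_nb(classes, counts, row):
    n = sum(classes.values())
    best, best_lp = None, -math.inf
    for c, nc in classes.items():
        lp = math.log(nc / n)                  # log prior P(class)
        for i, v in enumerate(row):            # Laplace-smoothed log likelihoods
            vocab = len({u for cc in classes for u in counts[cc][i]})
            lp += math.log((counts[c][i][v] + 1) / (nc + vocab))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# toy (humidity, temperature) observations with invented labels
rows = [("high", "hot"), ("high", "mild"), ("low", "mild"),
        ("low", "cool"), ("high", "hot"), ("low", "cool")]
labels = ["rain", "rain", "dry", "dry", "rain", "dry"]
classes, counts = train_nb(rows, labels)
pred = predict_nb(classes, counts, ("high", "hot"))
```

The conditional-independence assumption lets each feature (humidity, temperature, and in the paper's setting rainfall or cyclone level) contribute its likelihood independently.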
Video streaming is a household term nowadays and is gaining wide popularity among mobile users. A wide variety of mobile devices, such as smartphones and tablets, are equipped with multiple wireless network interfaces. Many videos are streamed over the Internet according to users' preferences, but the question is how to efficiently and cost-effectively improve video streaming quality. In order to maintain high streaming quality while reducing the wireless service cost, various approaches, such as improving bandwidth using adaptive algorithms, have been devised. Here, the optimal video streaming process over WiFi is carried out using bandwidth estimation and manipulation. Existing systems consider the quality of service (QoS) requirements for video traffic, such as start-up latency, playback fluency, average playback quality, playback smoothness and wireless service cost. Surveyed systems include various bandwidth estimation tools such as Spruce, Pathload and pathChirp. These tools are based on the probe gap model and the probe rate model: from the transmission of packets between sender and receiver, the available bandwidth is determined, the streaming quality is deduced, and the video quality is then adjusted to that bandwidth according to the user's choice. The rate adaptation decision is made at the client side. For each segment, the client can request the appropriate quality version based on its screen resolution, currently available bandwidth and buffer occupancy status, and it can request different parts of one segment over different links. The main contributions are threefold. First, we formulate the video streaming process over multiple links as a Markov decision process (MDP). To achieve smooth, high-quality streaming, we define several actions and reward functions for each state and calculate the estimated bandwidth.
Second, we propose a bandwidth manipulation algorithm that takes several future steps into consideration to avoid playback interruption and achieve better smoothness and quality. Last, we implement a realistic test bed using an Android phone and Scalable Video Coding (SVC) encoded videos to evaluate the performance.
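The flavour of the MDP formulation can be sketched with an invented reward function; the quality levels, penalty weights and function names below are assumptions for illustration, not the paper's actual model. The reward trades playback quality against smoothness (quality switches) and wireless service cost.

```python
QUALITIES = [0.5, 1.0, 2.0, 4.0]  # illustrative bitrates (Mbps) per quality level

def reward(level, prev_level, on_cellular, w_smooth=0.5, w_cost=0.3):
    """Reward quality, penalize quality switches and cellular usage cost."""
    quality = QUALITIES[level]
    smooth_penalty = w_smooth * abs(QUALITIES[level] - QUALITIES[prev_level])
    cost_penalty = w_cost * QUALITIES[level] if on_cellular else 0.0
    return quality - smooth_penalty - cost_penalty

def greedy_action(prev_level, est_bandwidth, on_cellular):
    """Pick the best one-step-reward level among those the link can sustain."""
    feasible = [l for l, q in enumerate(QUALITIES) if q <= est_bandwidth] or [0]
    return max(feasible, key=lambda l: reward(l, prev_level, on_cellular))

# estimated bandwidth dropped to 1.5 Mbps while playing the top level:
a = greedy_action(prev_level=3, est_bandwidth=1.5, on_cellular=False)
```

A full MDP solution would optimize the discounted sum of such rewards over several future steps rather than greedily, which is what lets it anticipate playback interruptions.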
There has been recent interest in studying the goal behind a user's Web query so that this goal can be used to improve the quality of a search engine's results. The inference and analysis of user search goals can be very useful in improving search engine relevance and user experience. In this paper, the main focus is on a survey of approaches from previous studies for inferring user search goals. Additionally, this paper proposes a framework to discover different user search goals for a query by clustering feedback sessions. Feedback sessions are constructed from user click-through logs and can efficiently reflect the information needs of users. A novel technique is also proposed to generate pseudo-documents that better represent the feedback sessions for clustering.
A Survey On Outlier Detection Technique In Streaming Data Using Data Clustering Approach
Mr. Mukesh K. Deshmukh, Prof. A. S. Kapse
Data mining is a highly researched area in today's world, as data is a crucial part of many applications, which is why many researchers take an interest in this domain. The need to process large datasets poses different challenges for researchers. Data that is free from noisy attributes, known as filtered data, is very important for gaining accuracy in result sets; therefore, finding and eliminating noisy objects has gained much importance. An object that does not follow the pattern of the usual data objects is called an outlier. Outlier detection is used in numerous applications such as fraud detection, intrusion detection systems, tracking of environmental activities and healthcare diagnosis. A number of approaches are used for outlier detection. Most approaches use cluster-based and distance-based techniques (i.e. the K-Means algorithm and Euclidean distance), which group similar elements into clusters of data points. Clustering techniques are highly useful for grouping similar data items, and outliers are then detected by applying distance-based calculations, hence the name cluster-based outlier detection. K-Means and Euclidean distance are the most common and popular choices for clustering and outlier detection due to their simplicity and efficiency. Different application areas of outlier detection are discussed in this paper.
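The cluster-based scheme described above can be sketched in a few lines: run K-Means, then flag any point whose Euclidean distance to its nearest centroid exceeds a threshold. The data points, threshold and deterministic initialization below are illustrative choices.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    centroids = points[:k]  # simple deterministic init for reproducibility
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            clusters[min(range(k), key=lambda i: euclid(p, centroids[i]))].append(p)
        centroids = [  # recompute centroids (keep old one if a cluster empties)
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def outliers(points, centroids, threshold):
    """Flag points farther than `threshold` from every centroid."""
    return [p for p in points
            if min(euclid(p, c) for c in centroids) > threshold]

points = [(1, 1), (1, 2), (2, 1), (2, 2),   # dense cluster A
          (8, 8), (8, 9), (9, 8), (9, 9),   # dense cluster B
          (20, 0)]                          # a far-away point
cents = kmeans(points, 2)
outs = outliers(points, cents, threshold=5.0)
```

The far-away point is the only one whose distance to both centroids exceeds the threshold, so it is reported as the outlier.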
Evaluate The Asphalt Pavement Performance Of Rut Depth Based On Intelligent Method
Raed Ibraheem Hamed, Zana Azeez Kakara
With the development of highways in Kurdistan, Iraq, the requirements of road users are increasing. The key problem is to design asphalt pavement construction so as to ensure pavement performance. Evaluation of asphalt pavement performance is very important, as it is concerned with quantitative and uncertain information. The fuzzy Petri net (FPN), one type of high-level Petri net, has attracted a lot of attention recently due to its sufficiency for knowledge representation and logical reasoning. In this paper we present an intuitive FPN model for asphalt pavement performance evaluation. The model, based on the theory of FPNs, provides a formal knowledge representation for vague asphalt pavement information (i.e. rut depth). The proposed approach brings asphalt pavement performance evaluation within the powerful modeling framework of FPN tools. The input values in our model, surface thickness (ST), traffic count (TC) and age (A), are formulated as uncertain fuzzy tokens to determine the rut depth values at time instance (t+1). The FPN components and functions use fuzzy operators, the AND operator (MIN) and the OR operator (MAX), for the If-parts and Then-parts of the fuzzy rules.
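The MIN/MAX reasoning step can be sketched generically; the membership degrees, rule certainty factors and place names below are invented for illustration, not the paper's calibrated model. Each rule combines its antecedent token degrees with MIN (AND), scales by the rule's certainty factor, and competing rules for the same output place combine with MAX (OR).

```python
def fire_rules(tokens, rules):
    """tokens: place -> truth degree in [0, 1].
    Each rule is (input_places, output_place, certainty_factor)."""
    out = {}
    for inputs, output, cf in rules:
        strength = min(tokens[p] for p in inputs) * cf     # AND-part via MIN
        out[output] = max(out.get(output, 0.0), strength)  # OR across rules via MAX
    return out

# illustrative membership degrees for surface thickness (ST), traffic count (TC), age (A)
tokens = {"ST_thin": 0.7, "TC_heavy": 0.9, "A_old": 0.6}
rules = [
    (("ST_thin", "TC_heavy"), "rut_high", 0.9),  # IF ST thin AND TC heavy THEN rut high
    (("A_old",), "rut_high", 0.8),               # IF age old THEN rut high
]
result = fire_rules(tokens, rules)
```

Here the first rule fires with strength min(0.7, 0.9) × 0.9 = 0.63 and the second with 0.6 × 0.8 = 0.48, so the MAX yields 0.63 as the truth degree of "rut depth high" at the next time instance.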
Design And Analysis Of High Speed, Low Power And Area Efficient DCT Architecture For Multimedia Applications Implemented On Cadence 180nm
Aman Kumar, K Ravi Kiran, Dr A Srinivasula Reddy, M. Sarojini
This paper proposes a power- and area-efficient discrete cosine transform (DCT) architecture for multimedia applications. It implements a conventional DCT and multiplierless DCTs using fewer adders/subtractors and multipliers; area and power savings are achieved by reducing the number of mathematical operations. The number of cells, cell area, internal power, net power, leakage power and switching power are all reduced compared to the conventional DCT. The power-delay products of the conventional DCT and the multiplierless DCTs are 19.8 mJ, 19.7 mJ and 10.8 mJ respectively. Both the proposed and conventional DCTs are implemented with the Cadence RTL Compiler at 180 nm.
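For reference, the textbook 1-D DCT-II that such architectures compute is sketched below; real multiplierless designs replace these cosine multiplications with shift-and-add approximations, which is where the adder/subtractor savings come from.

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a real sequence x."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# a constant signal concentrates all its energy in the DC coefficient
coeffs = dct_1d([1.0, 1.0, 1.0, 1.0])
```

This energy-compaction property (most energy in a few low-frequency coefficients) is why the DCT is the workhorse transform of image and video codecs.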
Active database systems support mechanisms that enable them to respond automatically to events taking place either inside or outside the database system itself. Considerable effort has been directed towards improving the understanding of such systems in recent years, and many different proposals have been made and applications suggested. This high level of activity has not yielded a single agreed-upon standard approach to the integration of active functionality with conventional database systems, but it has led to an improved understanding of active behaviour description languages, execution models and architectures. This paper presents the fundamental characteristics of active database systems, describes a collection of representative systems within a common framework, considers the consequences of certain design decisions for implementations, and discusses tools for developing active applications. Active database management systems are invoked by synchronous events generated by user or application programs as well as by external asynchronous data change events, such as a change in a sensor value or in time. This paper gives an introduction to active DBMSs and discusses how they differ from passive DBMSs.
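The event-condition-action (ECA) behaviour that distinguishes an active DBMS from a passive one can be sketched with a toy in-memory table; the class and rule-registration style below are invented for illustration, not any particular system's syntax.

```python
class ActiveTable:
    """Toy table that fires ECA rules on its own, like an active DBMS trigger."""

    def __init__(self):
        self.rows, self.rules = [], []

    def on(self, event, condition, action):
        """Register an ECA rule: when `event` fires and `condition` holds, run `action`."""
        self.rules.append((event, condition, action))

    def insert(self, row):
        self.rows.append(row)
        self._fire("insert", row)  # the system reacts; no polling by the application

    def _fire(self, event, row):
        for ev, cond, act in self.rules:
            if ev == event and cond(row):
                act(row)

alerts = []
sensors = ActiveTable()
# ECA rule: ON insert IF temperature > 100 DO raise an alert
sensors.on("insert", lambda r: r["temp"] > 100,
           lambda r: alerts.append(f"overheat: {r['id']}"))
sensors.insert({"id": "s1", "temp": 40})
sensors.insert({"id": "s2", "temp": 120})
```

A passive DBMS would store both rows and wait to be queried; the active one evaluates the condition at insert time and triggers the action itself.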
To Find the Comparison of Non-Split Domination Number, The Average Distance and The Diameter of Circular-Arc Graphs
Dr. A. Sudhakaraiah, B. Radhamma, K. Ramakrishna, M. Reddappa
A connected dominating set is used as a backbone for communications: vertices that are not in this set communicate by passing messages through neighbors that are in the set. Among the various applications of the theory of domination and distance, the most often discussed is a communication network, consisting of communication links between a fixed set of sites. Circular-arc graphs are rich in combinatorial structure and have found applications in several disciplines such as biology, ecology, psychology, traffic control, genetics and computer science, and they are particularly useful in cyclic scheduling and computer storage allocation problems. Suppose the communication network stops working due to a link failure. The problem is then to find the fewest number of failed communication links such that at least one additional transmitter would be required in order to communicate with all sites. This leads to the introduction of the concepts of the non-split domination number, the average distance and the diameter. In this paper we present a comparison of the non-split domination number, the average distance and the diameter of circular-arc graphs.
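Two of the measures compared above, the average distance and the diameter, can be computed by breadth-first search; the 5-cycle below (every cycle is a circular-arc graph) is an illustrative example, not one of the paper's instances.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every reachable vertex."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_distance_and_diameter(adj):
    """Average over unordered vertex pairs, and the maximum (diameter)."""
    nodes = list(adj)
    dists = [bfs_dist(adj, u)[v] for i, u in enumerate(nodes)
             for v in nodes[i + 1:]]
    return sum(dists) / len(dists), max(dists)

# the 5-cycle C5 as an adjacency list
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
avg, diam = avg_distance_and_diameter(adj)
```

For C5 each vertex sees two vertices at distance 1 and two at distance 2, giving an average distance of 1.5 and a diameter of 2, the kind of pair of values the paper compares against the non-split domination number.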
Advances in image acquisition techniques have made recording images easier than ever and bring great convenience to our daily life. At the same time, they raise the issue of privacy protection in photographs. One particular problem addressed in this paper concerns covert photographs, which are taken secretly and often against the subject's will. Covert photos are often privacy invasive and, if distributed over the Internet, can cause serious consequences. Automatic identification of such photos therefore serves as an important first step toward further privacy protection operations. The problem is, however, very challenging due to the large semantic similarity between covert and non-covert photos, the enormous diversity in the photographing process and environment of covert photos, and the difficulty of collecting an effective data set for the study. To overcome these challenges, three contributions are made. The first is to build a dataset of 2500 covert images, each verified carefully. The second is to conduct a user study on how humans perform in distinguishing between covert and non-covert images. The third is a covert photo classification algorithm that fuses various image features and visual attributes in a multiple kernel learning framework.
Emerging Technologies For Big Data Processing: NOSQL And NEWSQL Data Stores
Deepika Aggarwal, Roopam, Sonika
In this incessant era of science and technology, advances in web technology and the proliferation of mobile devices and sensors connected to the Internet are producing voluminous amounts of structured, semi-structured and unstructured data, called Big Data, and the demand for technologies with extensive processing and storage capabilities is rising in order to process such data effectively. Traditional relational databases are facing challenges in meeting the performance and scale requirements of Big Data. To meet these requirements, enterprises are adopting diversified technologies such as NoSQL and NewSQL, which are emerging as alternatives to relational database technologies for interrelated megatrends like Big Data and Cloud Computing. This paper discusses the prominent features of NoSQL and NewSQL data stores in the context of cloud computing.
Sequential Implementation of Web Content Mining Using Data Analysis in Online Sales Domain
Dr. S. P. Victor, Mr. M. Xavier Rex
In web data mining, the retrieval and storage analysis of web content plays a vital role in the online sales domain, providing a systematic, novel implementation over real-time data with different levels of implication. Our experimental setup initially focuses on the retrieval of web content. This paper performs a detailed study of a web content retrieval schema for the varying effects of periodic web page content in the online sales marketing domain, carried out with the expected optimal output strategies. We implement our experimental image restoration techniques with a real-time implementation of object representation for commercial household product domains, such as online marketing, which requires an open data analysis system. We also present algorithmic procedural strategies for the successful implementation of the proposed research technique in several sampling domains with a maximum level of improvement. In the near future we will implement optimal multiple-product poly-comparison techniques for the pricing structure of the online sales domain.
Modular Implementation of Neural network Structures in Marketing Domain Using Data Mining Technique
The process of implementing neural networks comprises several strategies in the field of data mining. The proper application of supervised and unsupervised learning concepts plays a vital role in the successful implementation of neural network methods, which are not easily used for data mining tasks since they often produce incomprehensible models and require long training times. In this paper we implement neural network learning algorithms that are able to produce comprehensible models and that do not require excessive training times. A real-time marketing domain is taken into account for the incorporation of a data mining rule extraction approach, which involves extracting symbolic models from trained neural networks. In the near future we will implement the descriptive and predictive approaches of data mining in real time using a neural network conceptual schema.
Comparison and Analysis of Document Stored Databases
Shriram Sharma, Atul Chaudhary
NoSQL (Not Only SQL) technology encompasses a broad variety of database technologies that were developed in response to the storage of large volumes of user data, high access frequencies, system performance demands and data processing needs. Relational databases were not designed to deal with the scalability and agility challenges faced by modern real-time applications. RDBMSs have been used in many applications for a long time; the data is stored in tabular form in a meaningful way, but there is now a need to store and manage amounts of data that cannot be handled by traditional relational databases. NoSQL technology overcomes this limitation of traditional databases by providing an efficient way of storing and managing various types of data in huge datasets. In this report, a performance analysis is carried out on document-oriented databases: MongoDB, CouchDB and Cassandra. A document-oriented database is a category of NoSQL database in which data is stored in JSON-like documents. This makes the database capable of storing huge amounts of data anywhere on disk.
Off-Line English Character Recognition with Geometric Discretization
Bayan Omar Muhammed, Zana Azeez Kakara
The recognition rate for handwritten English characters is still limited due to the large variation in shape, scale and format of handwritten characters. What makes character recognition especially difficult is that handwriting differs from one person to another and, given human variability, it is impossible for one person to write the same thing repeatedly in exactly the same way. This is considered an individualistic trait, where consistent individual features are hidden in the character handwriting. For that reason this study focuses on off-line English characters, extracting geometric moment features for the character shapes in handwriting recognition. The geometric moment features are computed thoroughly, and the presence of solo features is legitimized by checking and investigating them granularly; hence the idea of applying Invariant Discretization. With the support of Invariant Discretization, the solo performance is injected into the system by mapping different instances of a solo feature into an individual, or standard, feature performance. The advantage of Invariant Discretization is that it reduces the similarity error within a class (the same character) while increasing the similarity error between classes (different characters) in the recognition of off-line handwritten English characters with fuzzy logic.
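The geometric moment features mentioned above can be sketched as follows; the toy binary image is invented for illustration. Raw moments capture mass and position, and central moments (computed about the centroid) are translation-invariant, the starting point for invariant shape descriptors such as the Hu moments.

```python
def raw_moment(img, p, q):
    """Raw geometric moment m_pq of a 2-D intensity/binary image."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def central_moment(img, p, q):
    """Central moment mu_pq, computed about the image centroid."""
    m00 = raw_moment(img, 0, 0)
    cx, cy = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    return sum(((x - cx) ** p) * ((y - cy) ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

# toy 4x4 binary "character" image (a vertical bar)
img = [[0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0]]
area = raw_moment(img, 0, 0)      # m00 = number of foreground pixels
mu11 = central_moment(img, 1, 1)  # vanishes for this axis-aligned shape
```

A recognition pipeline would compute a vector of such moments per character sample and then discretize them, which is the stage where the Invariant Discretization described above operates.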
An efficient “3 phase 4 channel AC energy meter” with automatic load control and automated billing using GSM
Nikhilesh Chauhan, Prof. Mahesh Navale
Energy meter reading is a tedious process. At present, a meter reader goes to every premise, takes the reading manually and then issues the bill. Manual reading is prone to human error and does not provide reliable meter readings. To avoid this difficult task, an Automatic Energy Meter Reading (AMR) system is introduced. AMR is a technology that automatically collects consumption and status data from an energy metering device and transfers the data to the Electricity Board (EB) office using Wireless Sensor Networks (WSN). After the customer's serial number is verified, the bill is issued and the data is stored in a database. The proposed system automatically disconnects the meter either when the load crosses the permitted limit or when the payment period expires. It also prevents electricity tampering and provides accurate meter readings.