Signal processing plays an important role in the analysis and interpretation of ECG signals. An important objective of ECG signal processing is to deliver an accurately filtered result together with information that cannot readily be extracted from visual assessment of the signal. ECG signals are obtained by placing electrodes on the body surface of a human being, which contaminates the signals with noise. These noises include baseline wander, power-line interference, electromyographic (EMG) noise, electrode motion artifacts, and more. They act as hurdles during ECG signal processing, so pre-processing to remove and reject such noise is an important task. Filtering techniques are therefore the primary means of pre-processing for any signal, including ECG signals. The only care to be taken is that the real information in the ECG signal must not be distorted. In this paper, the main focus is on filtering out baseline wander and power-line interference.
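As a concrete illustration of the two filtering steps named above, here is a minimal Python sketch; the sampling rate, cutoff, and mains frequency are assumed values, not taken from the paper.

```python
import numpy as np
from scipy import signal

fs = 360.0   # assumed sampling rate (Hz); e.g. MIT-BIH records use 360 Hz

# High-pass Butterworth filter to suppress baseline wander (below ~0.5 Hz)
b_hp, a_hp = signal.butter(4, 0.5 / (fs / 2), btype="highpass")

# Notch filter centred on 50 Hz power-line interference (60 Hz in some regions)
b_notch, a_notch = signal.iirnotch(50.0, Q=30.0, fs=fs)

def preprocess_ecg(ecg):
    """Remove baseline wander, then power-line interference.

    filtfilt applies zero-phase filtering, so the QRS morphology
    (the clinically relevant information) is not shifted in time.
    """
    ecg = signal.filtfilt(b_hp, a_hp, ecg)
    ecg = signal.filtfilt(b_notch, a_notch, ecg)
    return ecg

# Demo on a synthetic signal: 1 Hz "heartbeat", slow baseline drift, 50 Hz hum
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 1.0 * t)
noisy = clean + 0.8 * np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
filtered = preprocess_ecg(noisy)
```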
Biometric verification identifies a person uniquely by one or more of their biological traits. These unique identifiers include eye patterns, hand patterns, fingerprints, earlobe geometry, retinas, DNA, voice waves, and signatures. Recent research in the biometrics field has explored further ways of identifying someone, and different gaits have been proposed. Among these, earlobe biometrics is considered stable in light of the aging of human bodies: unlike other traits, earlobe geometry does not change with time or emotion. The geometric features considered in earlobe biometrics are ear height, corresponding angles, the inner-ear curve, and reference-line cut points. Random orientation is performed, and it shows greater accuracy than the previous model. The recognition accuracy is increased by training images in databases. This class of passive identification is ideal for biometrics because the features are robust and can be reliably extracted from a distance.
Multitask TSK Fuzzy System Modeling by Mining Intertask Common Hidden Structure
Sonawane Gokul R., Gaikwad Vikas R., Vaidya Nikhil R., Sudake Shyam S., Prof. Rokade P. P.
Classical fuzzy system modeling methods implicitly assume that the data are generated from a single task; when the data are viewed from the perspective of multiple tasks, such modeling has an intrinsic inconsistency. In this project, a multitask fuzzy system modeling method based on mining the intertask common hidden structure is proposed to overcome this weakness of classical TSK-based fuzzy modeling for multitask learning. When classical fuzzy modeling methods are applied to multitask datasets, they usually focus on the task-independent information and ignore the correlations between different tasks. Here we mine the common hidden structure among multiple tasks to realize multitask TSK fuzzy system learning; this makes good use of the independent information of each task as well as the correlation information captured by the common hidden structure shared by all tasks. Thus, the proposed learning algorithm can effectively improve both the generalization and the fitting performance of the learned fuzzy system for each task. Our experimental results demonstrate that the proposed MTCS-TSK-FS has better modeling performance and adaptability than existing TSK-based fuzzy modeling methods on multitask datasets. Learning multiple tasks across different datasets is a challenging problem, since the feature space may not be the same for different tasks, and the datasets may be of any type, such as text datasets.
In the modern world, growth is a factor every company seeks, and growth is determined by the amount of revenue generated, the number of consumers satisfied, and so on. For product-based companies, achieving a greater number of sales is the key objective. It is achieved by understanding customers' requirements, employing various marketing strategies, carrying out relevant analysis, advertising, etc. All of these factors pose a great challenge for enterprises and companies around the world, but understanding the customer's interest is the greatest challenge of all. Every time we step into a mall or exhibition, it is a person's natural tendency to spend more time with the object that meets their interest, yet until now the interaction between consumers and products has never been tracked with respect to time. In our project, with the help of wireless communication, ubiquitous sensors such as RFID, and a proper GUI, we are developing a consumer interest tracking device capable of gathering valuable information about the time spent by an individual at various stores or products in an exhibition or shopping mall. Based on the information collected, we determine the interest of the consumer, which in turn helps the company manufacture better products, take smarter decisions, and ensure a safer future for the enterprise. The information gathered is made available for real-time monitoring, or can be stored for future analysis.
The research work in this paper aims to develop an unmanned aerial vehicle equipped with modern technologies for various civil and military applications. It is an automatic system. The shrinking size and increasing capabilities of microelectronic devices in recent years have opened the doors to more capable autopilots and pushed for more real-time UAV applications. The Unmanned Aerial Vehicle (UAV) market is expected to grow dramatically by 2020, as military, civil, and commercial applications continue to develop. Potential changes in air traffic management include the creation of an information management system to exchange information among Air Traffic Management users and providers, the introduction of new navigation methods, and the development of alternative separation procedures. The impact of each scenario on future air traffic and surveillance is summarized, and the associated issues are identified. The paper concludes by describing the need for a UAV roadmap to the future. It aims to provide a simple and low-cost solution for an autonomous aerial surveyor that can perform aerial surveillance, recognize and track various objects, and build a simple 3D map.
Comparison and Performance Analysis of Various FACTS Devices in Power System
Mit Fadadu, Jugal Lotiya
In recent years, power demand has increased substantially while the expansion of power generation and transmission has been severely limited due to limited resources and environmental restrictions. This paper investigates the enhancement in voltage stability margin as well as the improvement in power transfer capability of a transmission line in a power system with the incorporation of a Static Synchronous Compensator (STATCOM), a Fixed Capacitor Thyristor Controlled Reactor (FC-TCR), and a Static Synchronous Series Compensator (SSSC). A simple transmission line system is modelled in the MATLAB/SIMULINK environment. The load flow results are first obtained for an uncompensated system, and the voltage and power profiles are studied. The results so obtained are compared with the results obtained after compensating the system using the STATCOM, FC-TCR, and SSSC to show the voltage stability margin enhancement. The simulation results demonstrate the performance of the system for each of the FACTS devices in improving the power profile and thereby the voltage stability of the system.
A review on grid power quality improvement in wind energy system using STATCOM with BESS
Mr. Nirav Paija, Prof. Sweta Shah
Sources such as wind power and solar power are expected to be promising energy sources when connected to the power grid. Wind generators have a significant impact on power quality, the voltage profile, and the power flow for customers and electricity suppliers. The power extracted from these energy sources varies with environmental conditions. Owing to the fluctuating nature of the wind, wind power injection into an electric grid affects the power quality. The influence of wind sources on the grid system concerns power quality aspects such as reactive power, active power, voltage variation, harmonics, and electrical behaviour in switching operations [1]. A grid-connected wind turbine is considered here, along with the problems arising from the above system. At the point of common coupling, a Static Synchronous Compensator with a Battery Energy Storage System (STATCOM/BESS) can regulate active and reactive power in all four quadrants, which is an ideal scheme to solve the problems of wind power generation. As the power from wind generation varies with time, battery energy storage is used to maintain constant real power from the varying wind power. The power generated by the wind generator can be stored in the batteries during low-demand hours [2-4]. Combining battery storage with the wind energy generation system synthesizes the output waveform by absorbing or injecting reactive power and enables the real power flow required by the load. The control strategy coordinates the charging and discharging of the batteries with the reactive power compensation of the STATCOM, and balances the battery capacity. If required, the amount of energy consumed from or supplied to the grid can be observed through an online smart meter connected in the circuit.
The main objective is to preserve data privacy during communication. In this paper, we show how external aggregators or multiple parties can compute algebraic statistics over their private data without compromising data privacy, assuming all channels and communications are open to eavesdropping attacks. First, we propose several protocols that guarantee data privacy. We then propose advanced protocols that tolerate up to k passive adversaries who do not try to modify the computation.
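One common construction for this kind of privacy-preserving aggregation (a hedged sketch, not necessarily the exact protocols of the paper) is pairwise additive masking: parties i and j share a random value that one adds and the other subtracts, so the masks cancel in the aggregate while each individual report looks random to an eavesdropper.

```python
# Minimal sketch of privacy-preserving SUM aggregation via pairwise
# additive masking. The shared pairwise randomness is simulated here with
# a single seeded RNG; a real deployment would derive it from key exchange.
import random

P = 2**61 - 1  # arithmetic is done modulo a public prime

def masked_reports(values, seed=0):
    n = len(values)
    rng = random.Random(seed)            # stands in for pairwise shared seeds
    pair = {(i, j): rng.randrange(P) for i in range(n) for j in range(i + 1, n)}
    reports = []
    for i, v in enumerate(values):
        m = v % P
        for j in range(n):
            if i < j:
                m = (m + pair[(i, j)]) % P   # party i adds r_ij
            elif j < i:
                m = (m - pair[(j, i)]) % P   # party j subtracts r_ij
        reports.append(m)
    return reports

values = [12, 7, 30, 5]
reports = masked_reports(values)
aggregate = sum(reports) % P             # aggregator learns only the sum
assert aggregate == sum(values) % P
```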
Concurrent Data Compression for Cost Effective Applications
M. S. Narayanan
This work presents a novel idea for the compression of multi-parameter data encountered in various fields. The advantages of a time-domain approach are emphasised. A general mathematical model is proposed that can be adopted for any 3D object. The ease with which the method can be adopted for online data capture in remote and rural areas is the key aspect of this work.
Intelligent Techniques with GUI by Challenge Keypad for Secure Password
Mr. Krishna S. Gaikwad, Prof. Amruta Amu
In general, all keypad-based authentication systems leave several possibilities for password guessing by observing shoulder movements. Shoulder surfing is an attack on password authentication that has traditionally been hard to defeat, and this problem calls for a new solution. Devising a user authentication scheme based on personal identification numbers (PINs) that is both secure and practically usable is a challenging problem. The greatest difficulty lies in the susceptibility of the PIN entry process to direct observational attacks, such as human shoulder surfing and camera-based recording. The PIN entry mechanism is widely used for authenticating users; it is a popular scheme because it nicely balances the usability and security aspects of a system. However, if this scheme is used in a public system, it may suffer from shoulder-surfing attacks, in which an unauthorized user fully or partially observes the login session, or even records the session and uses the recording later to recover the actual PIN. In this paper, we propose an intelligent user interface, known as Color Pass, to resist shoulder-surfing attacks so that any genuine user can enter the session PIN without disclosing the actual PIN. Color Pass is based on a partially observable attacker model. The experimental analysis shows that the Color Pass interface is safe and easy to use, even for novice users.
The agile methods, such as Scrum and Extreme Programming (XP), have been a topic of much discussion in the software community over the last few years. While these have gained importance in the industry because of their approach to the issues of human agility and return on investment, usually within a context of small-to-medium size projects with significant requirements volatility, those who do not support these methods have expressed serious concerns about their effectiveness. Scrum attempts to build the work in short iterations, where each iteration consists of short time boxes. This paper reviews several papers on Scrum and its framework, including its artifacts and the ceremonies involved, and gives any beginner an insight into the Scrum methodology.
Bug Triage Using Dimensionality Reduction Technique And PSO Algorithm
S. Amritha, A. Jennifer Sagaya Rani
Bug triage is the step in bug fixing that aims to assign an appropriate developer to a new bug. Software companies spend a large share of their expenses dealing with bugs. To reduce the time and cost of bug triage, an automated approach is developed to predict a developer with relevant experience to solve a newly arriving report. In the proposed approach, data reduction is performed on the bug data set, which reduces the scale of the data as well as improves its quality. Instance selection and feature selection are used simultaneously with historical bug data. Previously, text classification techniques were applied to conduct bug triage; the problem is obtaining quality bug data sets, as they are very large. The proposed system addresses the problems of reducing the size and improving the quality of bug data. First, pre-processing is done to remove unimportant attributes and to identify missing terms. Then instance selection is combined with feature selection using a dimensionality reduction technique to simultaneously reduce the data size along the bug dimension and the word dimension. Using the PSO algorithm, the reduction order is determined from a fitness value, which helps produce a quality bug data set. The results show that the proposed system can effectively reduce the data size and improve the accuracy of bug triage. The proposed system thus leverages data-processing techniques to form reduced, high-quality bug data for software development and maintenance.
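A small, hedged sketch of the data-reduction idea follows: reports are vectorized, a feature-selection step shrinks the word dimension, and a fitness function of the kind a PSO particle would optimise scores the resulting triage accuracy. The toy reports, developer labels, and the chi-squared selector are illustrative stand-ins, not the paper's exact components.

```python
# Feature selection on bug-report text plus a PSO-style fitness function.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
import numpy as np

reports = ["crash on save dialog", "crash when saving file",
           "login button misaligned", "ui button misaligned on login"]
developers = ["dev_a", "dev_a", "dev_b", "dev_b"]   # historical assignments

X = TfidfVectorizer().fit_transform(reports)
y = np.array(developers)

def fitness(k_features):
    """Triage accuracy after keeping k word features: the quantity a PSO
    particle (encoding k and the reduction order) would try to maximise."""
    Xr = SelectKBest(chi2, k=min(k_features, X.shape[1])).fit_transform(X, y)
    return cross_val_score(MultinomialNB(), Xr, y, cv=2).mean()

print(fitness(4))
```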
RFID Based Ration Card Using Automated Load Cell
V. P. Patil, Ketan Malle, Juned Maniyar, Santosh Bikkad, Dhananjay Manatkar
Ration distribution is one of the big issues involving corruption and the smuggling of goods, and the main reason is that every task in the ration shop is done manually. Irregular activities occur, such as incorrect entries in the store's collection records containing wrong stock information about the products supplied to people; at times, products of lower quality than those actually provided by the Government may be distributed to the public; and the information about the stock quantity available in a ration shop, as supplied by the Government to the public, may be misreported. In this paper we propose replacing the manual work in the public distribution system with an automated system that can be installed at the ration shop with ease. This would bring transparency to the rationing system, as there will be direct communication between the user and the Government through this system.
A New Era to improve the QOS on Cloud using hybrid approach
Prabhjot Kaur, Talwinder Kaur
Cloud computing is an inevitable progression in the future development of computing technology. Its critical significance lies in its ability to provide all users with tremendous performance and reliable computation. With the evolution of system virtualization and Internet technologies, cloud computing has emerged as a new computing platform whose aim is to contribute virtualized IT resources as cloud services over the Internet. In cloud computing, a cloud user enters into an agreement called a Service Level Agreement (SLA) with a cloud provider, makes use of IT resources such as storage and servers as a service, and pays for the service. In a cloud computing environment, there are inevitably numerous service providers offering services with similar functionality but different QoS, and these services can be combined into tens of thousands of composite services with similar functions and different QoS; that is, there are many distinct combination plans. Therefore, in a service composition process, we need to choose service components from an enormous pool of services with similar functions and different QoS based on the user's QoS requirements. In this research work, we used populations of different sizes, adapted to different composition scales, and the efficiency of the algorithm was greatly improved; the research therefore concentrated on examining a dynamically adaptive approach to population size. The next step is to apply the proposed hybrid algorithm to a number of functional large-scale services in computing environments, in order to further enhance the efficiency and reliability of the hybrid GA. The behavior of the proposed method is analyzed using various research parameters, such as the input number of tasks, different population sizes, the average number of candidate services for each task, and the average fitness value of the simple genetic algorithm.
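To make the composition problem concrete, the following hedged sketch encodes one candidate choice per task as a chromosome and scores it with an aggregate QoS fitness, the quantity a GA population would evolve. All QoS values and weights below are fabricated for illustration.

```python
# QoS-aware service composition: chromosome = one candidate index per task.
import random

# candidates[t][c] = (response_time_ms, cost, availability) for task t, choice c
candidates = [
    [(120, 3.0, 0.99), (80, 5.0, 0.97), (200, 1.0, 0.95)],
    [(60, 2.0, 0.98), (90, 1.5, 0.99)],
    [(150, 4.0, 0.96), (110, 4.5, 0.99), (130, 2.5, 0.97)],
]

def fitness(chromosome):
    """Aggregate QoS of a composite plan: lower time and cost, higher
    availability are better (weights are illustrative only)."""
    time = sum(candidates[t][c][0] for t, c in enumerate(chromosome))
    cost = sum(candidates[t][c][1] for t, c in enumerate(chromosome))
    avail = 1.0
    for t, c in enumerate(chromosome):
        avail *= candidates[t][c][2]
    return -0.4 * time - 15.0 * cost + 300.0 * avail

def random_plan():
    return [random.randrange(len(c)) for c in candidates]

# Initialise a random population and report its best plan; selection,
# crossover, and mutation would iterate from this starting point.
pop = [random_plan() for _ in range(20)]
best = max(pop, key=fitness)
print(best, fitness(best))
```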
Pomegranate Leaf Disease Detection Using Support Vector Machine
Shivaputra S. Panchal, Rutuja Sonar
Identification of the plant diseases is the key to preventing the losses in the yield and quantity of the agricultural product. The studies of the pomegranate plant diseases mean the studies of visually observable patterns seen on the plant. It is very difficult to monitor the pomegranate plant diseases manually. Hence, image processing is used for the detection of pomegranate plant diseases. Disease detection involves the steps like image acquisition, image pre-processing, image segmentation, statistical feature extraction and classification. K-means clustering algorithm is used for segmentation and support vector machine is used for classification of disease.
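A minimal sketch of the stated pipeline follows: K-means segments the leaf image, simple colour statistics serve as features, and an SVM classifies the disease. The file names, disease labels, and the choice of k = 3 clusters are assumptions for illustration only.

```python
# K-means segmentation + statistical features + SVM classification.
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_leaf(path, k=3):
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)   # Lab separates colour from lightness
    pixels = np.float32(img.reshape(-1, 3))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(img.shape[:2]), centers

def features(path):
    labels, centers = segment_leaf(path)
    # simple statistical features: cluster centres plus cluster proportions
    props = np.bincount(labels.ravel(), minlength=3) / labels.size
    return np.concatenate([centers.ravel(), props])

# hypothetical training set of labelled pomegranate leaf images
paths = ["healthy1.jpg", "bacterial_blight1.jpg", "fruit_spot1.jpg"]
y = ["healthy", "bacterial_blight", "fruit_spot"]
X = np.array([features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, y)
```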
Soil Classification and Suitable Crop Yield Prediction Using Support Vector Machine
Shivaputra S. Panchal, Laxmi Sharma
Identification of the soil is the key to avoiding losses in the quantity and yield of agricultural products. The study of soil means the study of the outwardly recognizable patterns seen in it. Soil classification is essential for sustainable agriculture, yet it is very hard to carry out manually: it requires a tremendous amount of work and excessive processing time.
Hence, image processing is utilized for the identification of soil. Soil recognition includes steps such as image acquisition, image pre-processing, feature extraction, and classification. This project describes a Support Vector Machine based classification of soil samples.
Privacy-Preserving Public Auditing System In Cluster Based Distributed Storage
Nivethaa Varshinie R., Rajarajan A.
To secure outsourced data in distributed storage against corruption, adding fault tolerance to distributed storage, together with data integrity checking and failure repair, becomes critical. Recently, regenerating codes have gained popularity due to their lower repair bandwidth while providing fault tolerance. Existing remote checking methods for regenerating-coded data only provide private auditing, requiring data owners to always stay online and handle auditing as well as repairing, which is sometimes impractical; moreover, all the distributed data are stored in the same functional location, so search and data retrieval take much time, and this delay affects the efficiency of distributed storage. In this project, a public auditing scheme for regenerating-code-based distributed storage is proposed, together with an Attribute Based Clustering Technique (ABCT) for distributed data. ABCT resolves the time-delay issue and makes the system more efficient. We randomize the encoding coefficients with a pseudorandom function to preserve data privacy. ABCT achieves much faster data searching and retrieval. Extensive security analysis shows that our scheme is provably secure under the random oracle model, and experimental evaluation shows that it is highly efficient and can be feasibly integrated into regenerating-code-based distributed storage.
Mining the needed data for a given application is a crucial activity in the computerized environment, and mining techniques were introduced for this purpose; this project applies them to mobile apps. Ranking fraud in the mobile App market refers to fraudulent or deceptive activities whose purpose is bumping Apps up the popularity list. Indeed, it has become more and more frequent for App developers to use shady means, such as inflating their Apps' sales or posting phony App ratings, to commit ranking fraud. Here we first propose to accurately locate ranking fraud by mining the active periods, namely leading sessions, of mobile Apps. Furthermore, we investigate three types of evidence, i.e., ranking-based, rating-based, and review-based evidence, by modeling Apps' ranking, rating, and review behaviors through statistical hypothesis tests. In addition, this project uses an optimization-based application to integrate all the evidence for fraud detection, based on the EIRQ (efficient information retrieval for ranked query) algorithm. Finally, we evaluate the proposed system with real-world App data collected from the iOS App Store over a long time period. Experiments validate the effectiveness of the proposed system and show the scalability of the detection algorithm, as well as some regularities of ranking fraud activities.
An Optimization And Security Of Data Replication In Cloud Using Advanced Encryption Algorithm
S. Suganya, R. Kalaiselvan
Cloud computing is an emerging paradigm that provides computing, communication, and storage resources as a service over a network. In the existing system, data outsourced to a cloud is unsafe due to eavesdropping and hacking, and minimizing security-related network delays in cloud computing is important. In this paper we study data replication in cloud computing data centers. Unlike other approaches available in the literature, we consider both security and privacy preservation in the cloud. To overcome the above problem we use the DROPS methodology. The data is encrypted using AES (the Advanced Encryption Standard algorithm). In this process, the common data is divided across multiple nodes, and the fragmented data is replicated over the cloud nodes; each fragment is stored on a different node in an individual location. We ensure a controlled replication of the file fragments, where each fragment is replicated only once, for the purpose of improved security. The simulation results revealed that the simultaneous focus on security and performance resulted in an improved security level of the data, accompanied by a slight performance drop.
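A hedged sketch of the encrypt-fragment-place flow described above follows. Fernet (an AES-128-based construction from the cryptography package) stands in for plain AES here, and the node names and single-replica placement policy are illustrative, not the DROPS specification.

```python
# Encrypt, fragment, and place ciphertext fragments on distinct nodes so
# that no single node holds a meaningful piece of the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive outsourced record")

def fragment(blob, n):
    size = -(-len(blob) // n)               # ceiling division
    return [blob[i * size:(i + 1) * size] for i in range(n)]

nodes = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical cloud nodes
placement = dict(zip(nodes, fragment(ciphertext, len(nodes))))

# each fragment is replicated only once, on a node holding no other fragment
for node, frag in placement.items():
    print(node, len(frag), "bytes")
```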
Bandwidth Optimization In Data Retrieval From Cloud Using Continuous Hive Language
S.Surendran, K.Prema
In distributed applications, data centers process high volumes of data in order to serve user requests. Using a SQL analyzer to process user queries is centralized, which makes it difficult to manage large data sets; retrieving data from storage is also difficult; and the system cannot execute in a parallel fashion by distributing data across a large number of machines. Systems that compute SQL analytics over geographically distributed data operate by pulling all data to a central location, which is problematic at large data scales due to expensive transoceanic links. We therefore implement Continuous Hive (CHIVE), which facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to structure the data and query it using a SQL-like language called HiveQL, and CHIVE optimizes query plans to minimize their overall bandwidth consumption. The proposed system optimizes query execution plans and data replication to minimize bandwidth cost.
The growth of software engineering can justifiably be attributed to the advancement in Software Testing. The quality of the test cases to be used in Software Testing determines the quality of software testing. This is the reason why test cases are primarily crafted manually. However, generating test cases manually is an intense, complex and time consuming task. There is, therefore, an immediate need for an automated test data generator which accomplishes the task with the same effectiveness as manual crafting of test cases. The work presented intends to automate the process of Test Path Generation with a goal of attaining maximum coverage. The work presents a technique using Cellular Automata (CA) for generating test paths. The work opens the window of Cellular Automata to Software Testing. The approach has been verified on programs selected in accordance with their Lines of Code and utility. The results obtained have been verified.
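One way a cellular automaton can drive test-path generation, sketched below under stated assumptions: an elementary CA (rule 30 here, chosen for its pseudo-random behaviour) evolves a binary row, and each row is read as a sequence of branch decisions through a control-flow graph. The CFG encoding and the rule number are illustrative; the paper's exact construction may differ.

```python
# Elementary cellular automaton as a test-path generator.

def ca_step(row, rule=30):
    """One synchronous update of an elementary CA with wraparound:
    the 3-bit neighbourhood indexes into the bits of the rule number."""
    n = len(row)
    return [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
            for i in range(n)]

def decision_vectors(width=8, steps=10, seed_pos=4):
    row = [0] * width
    row[seed_pos] = 1                      # single-seed initial configuration
    for _ in range(steps):
        row = ca_step(row)
        yield row

# Hypothetical CFG: at branch node i, bit 0 takes the false edge and bit 1
# the true edge, so each CA row encodes one candidate test path.
for bits in decision_vectors():
    path = ["n%d:%s" % (i, "T" if b else "F") for i, b in enumerate(bits)]
    print(" -> ".join(path))
```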
QOS Ranking Prediction For Cloud Brokerage Services
Muthulakshmi, Mr. Rama Doss P.
Cloud computing is becoming popular, and building high-quality cloud applications is a critical research problem. QoS rankings provide valuable information for making an optimal cloud service selection from a set of functionally equivalent service candidates. To obtain QoS values, real-world invocations of the service candidates are usually required through the cloud broker. To avoid time-consuming and expensive real-world service invocations, we propose a QoS ranking prediction framework for cloud services that takes advantage of the past service usage experiences of other consumers. Our proposed framework requires no additional invocations of cloud services when the cloud broker service provider makes a QoS ranking prediction. Two personalized QoS ranking prediction approaches are proposed to predict the QoS rankings directly, based on cost and ranking. Comprehensive experiments are conducted on real-world QoS data, including 300 distributed users and 500 real-world web services from all over the world. The experimental results show that our approaches outperform other competing approaches.
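A hedged sketch of the ranking-prediction idea: consumers whose past usage rankings agree with the active user (measured here with Kendall's tau) lend their observations to rank services the active user never invoked. The QoS matrix and the similarity-weighted prediction rule below are illustrative, not the paper's exact approaches.

```python
# Personalized QoS ranking prediction from past usage of similar consumers.
import numpy as np
from scipy.stats import kendalltau

# rows = users, cols = services; entries = observed response time (ms),
# np.nan where the user never invoked the service
qos = np.array([
    [120, 300, np.nan, 80],
    [100, 280, 150, 90],
    [400, 100, 130, np.nan],
])

def similarity(u, v):
    """Rank agreement between two users on commonly invoked services."""
    mask = ~np.isnan(qos[u]) & ~np.isnan(qos[v])
    if mask.sum() < 2:
        return 0.0
    tau, _ = kendalltau(qos[u][mask], qos[v][mask])
    return 0.0 if np.isnan(tau) else tau

active = 0
sims = [similarity(active, v) for v in range(len(qos))]

# predict a QoS value for service 2, which user 0 has not observed,
# as the similarity-weighted average over like-minded neighbours
num = sum(sims[v] * qos[v][2] for v in range(len(qos))
          if v != active and not np.isnan(qos[v][2]) and sims[v] > 0)
den = sum(sims[v] for v in range(len(qos))
          if v != active and not np.isnan(qos[v][2]) and sims[v] > 0)
print(num / den if den else None)
```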
Integration Of Big Data And Cloud Computing To Detect Black Money Check Rotation With Range Aggregate Queries
K. Nithiya, S. Balaji
The cloud is expanding from application aggregation and sharing to data aggregation and utilization. To make use of the data, tens of terabytes to petabytes of data must be handled; such massive amounts of data are called big data. Range-aggregate queries apply a certain aggregate function to all tuples within given query ranges. Fast RAQ first divides big data into independent partitions with a balanced partitioning algorithm, and then generates a local estimation sketch for each partition. When a range-aggregate query request arrives, Fast RAQ obtains the result directly by summarizing local estimates from all partitions, and the collective results are provided. Conventional data mining can process only structured data, whereas the big data approach is used throughout this paper. We adopt a three-tier architecture: 1. big data implementation in a multi-system approach; 2. application deployment in banking/insurance; 3. extraction of useful information from unstructured data. We implement this project for the banking domain, with two major departments. 1. A bank server for adding new clients and maintaining their accounts: every user must provide an Aadhaar card as ID proof at registration to create an account in any bank. 2. An accounts monitoring server that monitors every user and their account status across different banks: this server retrieves users who maintain and transact more than Rs. 50,000 per annum across three accounts in different banks using the same ID proof. Map and Reduce is thereby achieved.
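The partition-and-summarize idea behind Fast RAQ can be sketched as follows. Here each per-partition summary is an exact prefix-sum structure for clarity; the real system uses approximate sketches, and the round-robin partitioning stands in for its balanced partitioning algorithm.

```python
# Answer a range-SUM query by combining per-partition local estimates
# instead of scanning all tuples centrally.
import bisect

class Partition:
    """Per-partition summary: sorted keys with prefix sums of the values."""
    def __init__(self, rows):
        rows = sorted(rows)                       # (key, value) pairs
        self.keys = [k for k, _ in rows]
        self.prefix = [0]
        for _, v in rows:
            self.prefix.append(self.prefix[-1] + v)

    def range_sum(self, lo, hi):
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return self.prefix[j] - self.prefix[i]

# balanced partitioning, simulated as round-robin over the incoming tuples
data = [(17, 5), (3, 2), (42, 7), (8, 1), (25, 4), (31, 9)]
partitions = [Partition(data[i::3]) for i in range(3)]

# a range-aggregate query is answered by summarising all local estimates
query = (5, 35)
print(sum(p.range_sum(*query) for p in partitions))
```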
Duality Of Iterative Soft Dilation In Multi Scale Soft Morphological Environment
Kompella Venkata Ramana
In this paper, duality is discussed for soft dilation in multi-scale as well as iterative environments. Soft erosion and soft dilation exist for various thresholds, so soft open and soft close also exist for various thresholds. If the definitions of soft erosion and soft dilation are studied (5), certain equalities can be seen among the soft morphological operations, so an equality may be established between soft erosion and soft dilation in a multi-scale environment. Open and close are composite operations, so soft open and soft close are also composite operations that exist at various thresholds, and equality may be observed among all soft morphological operations. In the same way, duality also exists for all soft morphological operations, because duality exists for all morphological operations: for example, the dual of erosion is dilation, the dual of dilation is erosion, the dual of open is close, and the dual of close is open. In this paper, duality is also discussed for soft erosion operations in multi-scale as well as iterative environments. A very important point is that such equality does not exist in classical mathematical morphology but does exist in soft mathematical morphology, so multiple duals occur for each soft morphological operation.
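For reference, the classical duality relations that the soft, thresholded versions generalize can be written in standard notation as follows (the paper's soft forms refine these with a threshold parameter):

```latex
\[
  (f \ominus B)^{c} = f^{c} \oplus \hat{B}, \qquad
  (f \oplus B)^{c} = f^{c} \ominus \hat{B},
\]
\[
  (f \circ B)^{c} = f^{c} \bullet \hat{B}, \qquad
  (f \bullet B)^{c} = f^{c} \circ \hat{B},
\]
% where c denotes complementation and \hat{B} the reflection of the
% structuring element B.
```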
Survey on Pre-Processing Techniques for Text Mining
Arjun Srinivas Nayak, Ananthu P Kanive, Naveen Chandavekar, Dr. Balasubramani R
Data mining is a versatile subfield of computer science: the computational process of detecting patterns in large data sets. This paper gives an overview of the different pre-processing techniques used to mine text data. Text mining applications include information retrieval, information extraction, categorization, and natural language processing. The pre-processing stage of text mining starts with tokenization, followed by stop-word removal, and finally stemming. This paper evaluates Porter's and Krovetz's algorithms, highlighting their applications and drawbacks.
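The three-stage pipeline named above can be sketched in a few lines with NLTK (Porter shown; a Krovetz stemmer is not part of NLTK). The sample sentence is illustrative.

```python
# Tokenization -> stop-word removal -> stemming.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

text = "The stemmers are stemming the tokenized words quickly."
tokens = word_tokenize(text.lower())                           # 1. tokenization
stop = set(stopwords.words("english"))
tokens = [t for t in tokens if t.isalpha() and t not in stop]  # 2. stop words
stems = [PorterStemmer().stem(t) for t in tokens]              # 3. stemming
print(stems)   # note how Porter can over-stem, e.g. 'quickly' -> 'quickli'
```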
Heterogeneous Phase-Level Scheduling With Jobs Execution Scheduling Algorithm To Enhance Job Execution And Resource Management In Mapreduce
M. Sneha Priya, Mrs. R. Rebekha
In big data, MapReduce is a technique that helps process queries from users to servers in an efficient way. MapReduce is used to process a large number of servers in parallel, and this parallel processing is used to retrieve results. To achieve it, jobs are split into three phases, and each phase is provided with resources for parallel and fast execution. If the resources are provided in a homogeneous way, a task takes more time to complete. A heterogeneous phase-level scheduling algorithm with Jobs Execution Scheduling is therefore used to split the resources in a heterogeneous way. This helps jobs execute better with effective use of resources, improving speed and exposing the resource usage variability within the lifetime of a task, using a wide range of MapReduce jobs. This scheduler improves execution parallelism and resource utilization without introducing stragglers. An energy-efficient algorithm provides the flow time of a job, where the flow time is the length of the time interval between the release time and the completion time of the job, with work efficiency.
Review On Secure Anti-Collusion Data Sharing For Dynamic Group In The Cloud
Rahul S.Nandanwar, Vijendrasinh P.Thakur
Benefiting from cloud computing, users can achieve an effective and economical approach to data sharing among group members in the cloud, with the characteristics of low maintenance and little management cost.
In multiuser cloud computing, securely sharing documents is a major problem: membership changes frequently, and there are challenging issues in preventing collusion attacks and securing the system against revoked users. In this paper we propose a secure data forwarding mechanism for dynamic members. First, we propose a cloud system in which a number of servers are present and any user can store a file on any server. Second, the file is uploaded as a number of blocks on the same server. Third, for data forwarding, the uploading user may forward the data to the requesting user in the cloud: if members of the cloud want to exchange information with one another, they forward the data to the members' IDs. Other members cannot access a file in the cloud without the permission of the file uploader, so under this scheme a revoked user cannot access the original data. Each time, a cloud member must obtain permission from the uploading member to access the information; at the time of file transfer, the uploader, knowing the requester's ID, provides the server ID and the block numbers to the downloader. The file is stored on the server in at most four blocks, i.e., the file is split into four parts, and each pair of blocks is encrypted and stored on the server. The RSA and MD5 algorithms are used to protect the four blocks, and in this multiuser environment a user does not know which algorithm is used for which block.
We tend to think that what we can access easily on the Internet is the whole of the information available, but the amount of information actually available is much larger in volume, because the web is divided into two parts: the surface web, which we can access easily, and the deep web, which is invisible to us because search engines are unable to index it directly. To handle this huge volume of information, web searchers use search engines, but the hidden web contains a large collection of data that is unreachable by normal hyperlink-based search engines. To access this content, one must submit valid input values to an HTML form. The resultant web pages from the previous step are stored, and indexing techniques are then needed to improve the search results for a user query. Indexing of the hidden web is done to reveal the relevance of a document according to the context of the search. The objective is to find different ways of indexing web pages from the hidden web using ontology, and to analyze the different techniques previously proposed. The main advantage of storing an index is to optimize speed and performance while finding relevant documents in the search engine storage area for a user's search query.
Gauss Legendre Quadrature Formulas for Polygons by Using a Second Refinement of an All Quadrilateral Finite Element Mesh of Triangular Surfaces
H. T. Rathod, K. V. Vijayakumar, A. S. Hariprasad, K. Sugantha Devi
This paper presents a numerical integration formula for the evaluation of $\iint_{\Omega} f(x,y)\,dx\,dy$, where $\Omega$ is any polygonal domain in $\mathbb{R}^{2}$, that is, a domain whose boundary is composed of piecewise straight lines. We then express the integral over $\Omega$, a polygonal domain of $N$ oriented edges with given end points, as a sum of edge contributions. We have also assumed that $\Omega$ can be discretised into a set of $M$ triangles, and each triangle is further discretised into three special quadrilaterals, obtained by joining the centroid to the midpoints of its sides. We choose an arbitrary triangle with vertices in Cartesian space. We have further refined this mesh two times: the first refinement of this mesh was considered in our recent work, and in this study we propose a second refinement of the above-stated all-quadrilateral mesh.
We have shown that when the triangle is divided into three quadrilaterals $Q_e$, $e=1,2,3$, an efficient formula for this purpose is expressed in terms of the transformation
\[
u(\xi,\eta) = \frac{(1-\xi)(5+\eta)}{24}, \qquad
v(\xi,\eta) = \frac{(1-\eta)(5+\xi)}{24}.
\]
Image Enhancement using Contrast Limited Adaptive Histogram Equalization and Wiener filter
Mithilesh Kumar, Ashima Rana
This paper presents image enhancement using the Contrast Limited Adaptive Histogram Equalization method and a Wiener filter to remove noise that might be present in the image. A gamma correction technique is used to transfer the image into a suitable dynamic range. To avoid amplifying any noise that might be present in the image, the contrast-limiting parameter of adaptive histogram equalization is used to bound the contrast, especially in homogeneous areas.
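The three-step chain described above (gamma correction, CLAHE with a clip limit, Wiener filtering) can be sketched as follows; the gamma value, clip limit, tile grid, window size, and file names are assumed parameters, not the paper's.

```python
# Gamma correction -> CLAHE -> Wiener filter.
import cv2
import numpy as np
from scipy.signal import wiener

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# 1. gamma correction via a lookup table (gamma < 1 brightens dark images)
gamma = 0.8
lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
img = cv2.LUT(img, lut)

# 2. CLAHE: clipLimit bounds contrast amplification in homogeneous areas
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img = clahe.apply(img)

# 3. Wiener filter to suppress noise amplified by the equalization
img = wiener(img.astype(np.float64), mysize=5)
img = np.clip(img, 0, 255).astype(np.uint8)
cv2.imwrite("enhanced.png", img)
```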
A new number system, Perfect Difference Number System (PDNS), based on the mathematical notion of Perfect Difference Sets (PDS) is proposed. It is expected that PDNS will take its place in the family of other number systems and will benefit computer theory and applications
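The defining property behind PDNS can be checked in a few lines: a perfect difference set D modulo n = k(k-1)+1 yields every nonzero residue exactly once as a difference of two of its elements. The sets below are classical small examples, shown purely as illustration.

```python
# Verify the perfect-difference property of a candidate set D modulo n.
def is_perfect_difference_set(D, n):
    diffs = [(a - b) % n for a in D for b in D if a != b]
    return sorted(diffs) == list(range(1, n))

print(is_perfect_difference_set({0, 1, 3}, 7))      # True: k = 3 example
print(is_perfect_difference_set({0, 1, 3, 9}, 13))  # True: k = 4 example
```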
Web services are important integrated software components for the support of interoperable machine-to-machine communication over the Internet. They have been widely used for building service-oriented applications in industry in recent years, and the number of publicly available web services on the Internet has grown rapidly. For users, it is very hard to select a proper web service from among the large number available, and an improper selection may cause many problems (e.g., ill-suited performance) for the resulting applications. In this paper we propose an innovative Collaborative Filtering (CF) algorithm for QoS-based web service recommendation, and we provide a personalized map for browsing the recommendation results. The map explicitly shows the QoS relationships of the recommended web services, as well as the basic structure of the QoS space, using map metaphors such as dots, areas, and spatial arrangement.
The modern era is moving towards computing devices with a high degree of mobility. From the security standpoint, the drawbacks to be handled are that mobile devices need to be power conscious and can easily be stolen owing to their low weight and small form factor. Due to limited power availability, the amount of cryptographic computation should be minimized, as it consumes more power. The other major drawback from the security angle is that, when the computing device physically goes missing, the data stored in both dynamic and static storage can be retrieved by unauthorized persons, leading to a breach of privacy and confidentiality. This paper explains mechanisms for modern computing devices by which privacy and confidentiality concerns can be addressed with limited use of cryptographic functions, which also has the advantage of lower battery consumption.
Access to higher education among the paudibhuyan tribe in Barkote block, Deogarh District in Odisha: A sociological study
Sudipta Pradhan
The name of the proposal topic is “Access to higher education among the paudibhuyan tribe in Barkote block, Deogarh District in Odisha: A sociological study”.
Not long ago, the ideas of tribe, region, and nation conveyed a single complex whole, and thus each could be comprehended if studied along with the others; each term simultaneously incorporated social, geographical, and political dimensions. Hence, even analytically, these were hardly distinguishable. This merely meant that a given socio-cultural collectivity, or closely related socio-ethnic categories, occupied an ecological territory and had a kind of political structure to manage the system as well as interactions with outsiders, or the "others". But like all concepts and terms, they too had to adjust to newer political and ideological exigencies. Small wonder that the terms currently used are often incomprehensible.
Tribal society is changing, and many sociologists have studied this changing form of tribal society. The change in tribal society is analysed on the basis of continuum studies in different countries of the world; among the sociologists who studied this change and development of tribal society is Robert Redfield, who developed a series of studies of Mexican villages and the "rural-urban continuum".
According to the 2011 census report, Odisha's literacy rate is 74.45%, while the literacy rate of the Paudibhuyan tribal people is 19.24%. Around 30% of schools in these remote villages have no teacher, or only one. More than 75% of Paudibhuyan villages are not connected to a road, and all 36 Paudibhuyan villages are located in remote forested and hilly areas.
The Paudibhuyan depend on the forest and forest products, eating different fruits, tubers, leaves, and seasonal foods like mushrooms, jackfruits, mangoes, bamboo roots, etc. They also grow crops like millets, pulses, and some paddy if the rainfall is good and timely. Traditionally practitioners of shifting cultivation, most of them have now settled down in permanent habitats. However, the lack of irrigation, uncertain rainfall, and depleting forest and natural resources have resulted in an acute shortage of food in the last few years. As a result, a large number of women and more than 70% of children in the villages are malnourished.
Database Management System And Information Retrieval
Shubham S. Tade, Tejashri A. Kapse
A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. It allows organizations to conveniently develop databases for various applications through database administrators (DBAs) and other specialists. Information retrieval (IR) is the activity of obtaining information resources relevant to an information need from a collection of information resources. Searches can be based on metadata or on full-text (or other content-based) indexing. This work was driven by the increasing functional requirements that modern full-text search engines have to meet; current database management systems (DBMS) are not capable of supporting such flexibility. Moreover, with the increase in data to be indexed and retrieved and increasingly heavy workloads, modern search engines suffer from scalability, reliability, distribution, and performance problems. We present a new and simple way of integration and compare the performance of our system with current implementations based on storing the full-text index directly on the file system.
Keywords: full-text search engines, DBMS, IRS, DBMIRS, scalability.
An Automated Medical Support System based on Medical Palmistry and Nail Color Analysis
Shubhangi Meshram, Anuradha Thakare
An Automated Medical Palmistry System (AMPS), an application of digital image processing and analysis techniques, can be useful in the healthcare domain to predict diseases in human beings. The proposed work covers the design and implementation of an automated system to detect various health conditions of patients. Images of human palms and nails are the input to the system. By applying digital image processing techniques to the input images, certain features in each image are identified, and the knowledge base of medical palmistry is used to analyze these features and predict the probable disease.
In mobile ad hoc networks (MANETs), the provision of quality of service (QoS) guarantees is much more challenging than in wireline networks, mainly due to node mobility, multi-hop communications, contention for channel access, and a lack of central coordination. The difficulties in providing such guarantees have limited the usefulness of MANETs. In the last decade, much research attention has focused on providing QoS assurances in MANET protocols. In this paper we analyse different types of routing protocols and QoS metrics in MANETs. The trust-based AODV (TAODV) routing protocol modifies the behaviour of the AODV routing protocol, and TAODV is preferred as it achieves the best results in terms of end-to-end delay, energy, packet delivery ratio, and throughput.
Entity linking is the task of linking entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. The large number of web applications generating knowledge base data has led to major entity linking research. We present an overview of the analysis and the main approaches to extracting data from raw data and linking the entities with context.
Effect of Transformer Connection on Protection of Feeder Connected With Distributed Generators
N.Y. Savjani
Energy demand is expected to grow very rapidly, and Distributed Generation (DG), or alternate energy systems, is expected to play an increasing role in the future of power systems. DG is defined as small-scale generation and can be connected at load points. DG is becoming increasingly important in the power system because of its high efficiency, small size, low investment cost, modularity, and, most significantly, its ability to exploit renewable energy resources. After connecting distributed generation, part of the system may no longer be radial, which means coordination might not hold good, as the traditionally radial power distribution system is converted into a multi-source unbalanced network. DG may bring about reliability degradation, instead of reliability enhancement, due to relay-relay miscoordination. The current literature covers issues related to DG connection, mainly the impact of the DG interfacing transformer connection on IDMT overcurrent relays. This is illustrated by test results on a system simulated using PSCAD.
Comparative study of various classification algorithms combined with K means algorithm for Leaf Identification
Nisha Pawar, Dr.K.Rajeswari
Plants play a vital role in our daily life. Plants are a great source of medicine for many diseases: owing to their lower chance of side effects on the human body and their better compatibility with humans, using plants to treat diseases is considered safer. Other items, like paper and bio-diesel, are also obtained from plant material. Hence, identification of plants is a very important task, helpful in various areas such as agriculture, Ayurveda, botanical research, biological research, etc. Leaf-based features give more appropriate results than other parts of the plant in plant identification, but manual leaf identification is a very difficult and time-consuming task. To implement automatic leaf identification, classification techniques like Naïve Bayes classification, neural networks, etc., can be used. To obtain the leaf-based features, image processing techniques are applied to the image of the leaf; after feature extraction, a plant's leaf is classified based on the leaf features. The main objective of this paper is to present a survey of different classification methods for plant leaf identification, and to conclude which classification method gives better accuracy compared to the others.
Simulation of AODV and QoS Based CBRP Based WMN Protocol Routing with Varying Pause Time using NS-2
Divya Kohli, Prof. Ajay Lala, Prof. Ashish Chaurasia
The wireless mesh network is a new emerging technology that will make industrial network connectivity more efficient and profitable. Mesh networks, consisting of static wireless nodes and mobile clients, have emerged as a key technology for new-generation networks. Quality of Service (QoS) is designed to promote and support real-time multimedia applications (audio and video). However, guaranteeing QoS in wireless networks is a difficult problem compared with its deployment in a wired IP network. This paper focuses on enhancing an efficient cluster-based routing protocol, Q-CBRP, for a random mobility model of mesh clients, using varying traffic such as HTTP, FTP, and video streaming, and measures different QoS criteria in a mobile WMN.
Varying Number of Selfish Nodes based Simulation of AODV Routing Protocol in MANET using Reputation Based Scheme.
Gurmeet Kaur Lamba, Prof. Ashish Chaurasia, Prof. Ajay Lala
Unlike fixed networks, mobile ad hoc networks need more security mechanisms. Attackers may intrude into the network through subverted nodes. The network topology is highly dynamic, as nodes frequently join or leave the network and roam within it. In spite of this dynamic nature, mobile users require security services as they move from one place to another. The security solution should protect each node in the network, and the security of the entire network relies on the collective protection of all the nodes; the solution should protect the network from both inside and outside intruders. In this paper we implement the AODV protocol with a reputation-based scheme to detect selfish nodes, and the evaluation is done through performance metrics (packet delivery ratio, average end-to-end delay, average throughput) in a network simulator.
Images containing faces are indispensable for intelligent vision-based human-computer interaction and research. They are also essential for face processing, which includes face recognition, face tracking, pose estimation, and expression recognition. Face recognition is one of the biometric methods used for the identification of faces. Biometrics means "life measure". In biometrics there are various criteria for recognizing individuals, such as fingerprints, iris, voice, face, etc. Human faces, however, are easily collectible under various situations, so we use face recognition as a biometric measure. Various factors affect the recognition of faces, such as illumination, pose variation, and resolution. For security purposes, face recognition is important in many places, like organizations, airports, and crime scenes; it is also used in civil applications and law enforcement. Variations in pose have been a major challenge in face recognition. This paper presents a face recognition method that solves the problem of pose variation.
Image Steganography with the password authentication using CHAP protocol
Pooja S Devagiri, Dr S L Desphande
The use of technology to exchange information in the field of computer networks, such as the Internet and mobile communication, has increased. The Internet is used as the main channel for the exchange of information between two users, but this exchange is not safe, which has led to many serious problems such as hacking and duplication. Consequently, it is important to secure the data.
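For the password-authentication side of the title, the CHAP exchange (RFC 1994) can be sketched as below: the verifier sends a fresh challenge, and the client proves knowledge of the shared secret without ever transmitting it. The secret value and identifier here are placeholders.

```python
# CHAP challenge-response per RFC 1994: response = MD5(id || secret || challenge).
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-password"      # known to both client and server (assumed)
challenge = os.urandom(16)       # fresh random challenge per login attempt
ident = 1

client_resp = chap_response(ident, secret, challenge)
server_expected = chap_response(ident, secret, challenge)
print("authenticated:", client_resp == server_expected)
```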
The performance of an image denoising algorithm is critically dependent on the accuracy of noise level estimation. In the estimation process, restricting the influence of image content on the estimation dataset is a major challenge. In this paper, we propose a novel method that applies the Frobenius norm as the basis of content energy measurement. It truncates the SVD (Singular Value Decomposition) components, thereby effectively restricting the influence of image details, and applies linear regression to determine the content parameter, which enhances the application scope of the proposed method. The experimental results demonstrate the effectiveness of the proposed approach.
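A simplified sketch of the SVD-truncation idea: the leading singular values carry most of the image content, so the Frobenius energy of the tail is dominated by noise. For an m-by-n pure-noise matrix with standard deviation sigma, E||X||_F^2 = m n sigma^2, which motivates the estimator below; the fixed truncation rank r stands in for the paper's regression-based content correction.

```python
# Estimate noise standard deviation from the tail singular values.
import numpy as np

def estimate_noise_std(img, r=16):
    img = img.astype(np.float64)
    s = np.linalg.svd(img, compute_uv=False)
    tail_energy = np.sum(s[r:] ** 2)          # content largely suppressed
    m, n = img.shape
    return np.sqrt(tail_energy / ((m - r) * n))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth, low-rank image
sigma = 10.0
noisy = clean + rng.normal(0, sigma, clean.shape)
print(estimate_noise_std(noisy))    # prints a value near the true sigma of 10
```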
The growth of cloud computing offers attractive and innovative computing services through resource pooling and virtualization techniques. The cloud delivers different types of computing services to its consumers according to a pay-per-usage economic model. However, this new technology introduces new risks for enterprises and businesses regarding their security and privacy. One of the new cloud services is Security as a Service (SECaaS), a model aimed mainly at enhancing the security of a cloud environment; it is a way of gathering security solutions under the control of security specialists. Identity and access control services are one area of such security, and are sometimes presented under the term Identity as a Service.
Effect on Information Retrieval Using One and Two Point Crossover
Manoj Chahal
Information in the digital world is growing at a very high speed, and it is difficult to retrieve the relevant and important information. Search engines are used to retrieve relevant information, relying on genetic algorithms and information retrieval systems; even so, retrieving information that matches the user's requirement is still difficult. In this paper, information is retrieved using a genetic algorithm with one- and two-point crossover, and the cosine and Horng & Yeh similarity functions are used as fitness functions in the genetic algorithm.
Keywords: Genetic Algorithm, One point crossover, Two point crossover, Information Retrieval, Vector Space Model, Database, Similarity Measure.
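To make the two operators concrete, here is a hedged sketch: binary query chromosomes (1 = keep the term, 0 = drop it) undergo one-point or two-point crossover, and cosine similarity against a document vector serves as the fitness, in the spirit of the vector space model. The vectors below are illustrative.

```python
# One-point and two-point crossover with a cosine fitness function.
import random
import numpy as np

def one_point(p1, p2):
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2):
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def cosine_fitness(query_vec, doc_vec):
    """Cosine similarity between query and document vectors (VSM)."""
    q, d = np.asarray(query_vec, float), np.asarray(doc_vec, float)
    return q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12)

p1, p2 = [1, 1, 0, 0, 1, 0], [0, 1, 1, 1, 0, 1]
c1, c2 = two_point(p1, p2)
doc = [0.2, 0.9, 0.1, 0.4, 0.7, 0.3]    # hypothetical document weights
print(c1, cosine_fitness(c1, doc))
```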
Detection of Outlier in Categorical Dataset using Hybrid Algorithm
Ms. Kanchan D. Shastrakar, Prof. Pravin G. Kulurkar
Outlier mining is the important task of discovering data records that behave exceptionally compared with the other records in the dataset. Outliers do not conform with the other data objects in the dataset. There are many effective approaches for detecting outliers in numerical data; most of the earliest work on outlier detection was performed by the statistics community on numeric data, but for categorical datasets the approaches are limited. By combining NAVF (Normally distributed Attribute Value Frequency) and ROAD (Ranking-based Outlier Analysis and Detection), a new hybrid approach for outlier detection in categorical datasets is formed.
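The AVF scoring that NAVF builds on can be sketched as follows: a record is more outlying the rarer its attribute values are, so the records with the lowest mean value-frequency are flagged. (NAVF then fits a normal distribution to these scores, and ROAD adds ranking-based analysis; both refinements are omitted here.) The toy dataset is illustrative.

```python
# Attribute Value Frequency (AVF) outlier scoring for categorical data.
from collections import Counter

data = [
    ("red",  "small", "round"),
    ("red",  "small", "round"),
    ("red",  "large", "round"),
    ("blue", "small", "square"),   # rare values -> low AVF score
]

# frequency of each value per attribute position
freq = [Counter(row[j] for row in data) for j in range(len(data[0]))]

def avf_score(row):
    return sum(freq[j][v] for j, v in enumerate(row)) / len(row)

for row in sorted(data, key=avf_score):
    print(avf_score(row), row)     # lowest score printed first = outlier
```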
ReCoCa Scheme For Wireless Multimedia Sensor Networks
Deepti Gangwar
In recent years, rapid advances in low-power circuitry and in cheap complementary metal-oxide semiconductors (CMOS) have fostered the growth of a new type of sensor node that can retrieve multimedia-rich content, such as video and audio streams and images, along with ordinary data. These new sensor nodes, equipped with cameras and microphones, are known as multimedia sensors. Wireless multimedia sensor networks (WMSNs) have a very wide range of applications, from environmental monitoring and traffic surveillance and control systems to healthcare systems and battlefield applications. All these applications operate in real time and require information with low delay, so sensors communicate with each other to provide the required information in less time.
A region-based co-operative caching (ReCoCa) scheme for WMSNs considers each node differently, according to the contention introduced by each link. The proposed method selects some nodes on the basis of their traffic flow and contention value. A cache discovery mechanism is proposed which searches for the requested data item based on the region, and a value-based cache replacement method is proposed which deletes data items according to the value of a function. Simulation results demonstrate the effect of the number of nodes and of node density in the network on cache overhead, and the effect of data availability on the average delay to serve a request.
Resilience to packet loss is a critical requirement in predictive video coding for transmission over packet-switched networks, since the prediction loop propagates errors and causes substantial degradation in video quality. This work proposes an algorithm to optimally estimate the overall distortion of decoder frame reconstruction due to quantization, error propagation, and error concealment. The method recursively computes the total decoder distortion at pixel level precision to accurately account for spatial and temporal error propagation. The accuracy of the estimate is demonstrated via simulation results. The estimate is integrated into a rate-distortion (RD)-based framework for optimal switching between intra-coding and inter-coding modes per macroblock. The cost in computational complexity is modest. The framework is further extended to optimally exploit feedback/acknowledgment information from the receiver/network. Simulation results both with and without a feedback channel demonstrate that precise distortion estimation enables the coder to achieve substantial and consistent gains in PSNR over known state-of-the-art RD- and non-RD-based mode switching methods.
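The per-pixel expected distortion underlying such recursive estimation can be written as below (notation assumed here: f is the encoder-side pixel value and its tilde counterpart the random decoder reconstruction under packet loss); the point is that tracking the first and second moments of the reconstruction suffices, and these moments are what the algorithm updates recursively across frames for intra-coded, inter-coded, and concealed pixels.

```latex
\[
  d_n^i \;=\; E\!\left[(f_n^i - \tilde{f}_n^i)^2\right]
        \;=\; (f_n^i)^2
        \;-\; 2\, f_n^i\, E\!\left[\tilde{f}_n^i\right]
        \;+\; E\!\left[(\tilde{f}_n^i)^2\right]
\]
```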
Best Customer Services among the E-Commerce Websites – A Predictive Analysis
M. Emerentia, N. Yuvaraj
Online shopping websites have become popular, as they allow customers to purchase products easily from home. These websites ask customers to rate the quality of the products, and the resulting reviews describe different features of the products, helping to enhance the quality of the products and services. In order to buy a product, a customer has to go through a large number of reviews, which makes it quite hard for customers as well as marketers to maintain, keep track of, and understand customers' views of the products. In this project I have designed a consolidated website with comparison data about e-commerce website services, such as price, cash on delivery, product replacement, etc., and produced a predictive result about which shopping website gives the best customer service. This can reduce the customer's purchasing time, instead of comparing the shopping websites' product reviews to find the best e-commerce website from which to purchase the product.
Real Time Analysis using Hadoop
Ramchandra Desai, Anusha Pai, Louella Menezes Mesquita e Colaco
This is an age of big data. There is huge development in all science and engineering domains, which expands big data tremendously. Tweets are considered raw data, and such a huge amount of raw data can be analyzed and represented in a meaningful way according to our requirements and processes. This work provides a way of performing sentiment analysis using Apache Hadoop, processing the large amount of data on Hadoop and Storm faster, in real time. Apache Kafka is used here along with Apache Hadoop and Storm as a queuing system. Sentiment analysis of tweets from Twitter is done to know who the favourite in the tournament is.
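For illustration, a minimal version of the sentiment-scoring step that could run inside a Storm bolt (after tweets are queued through Kafka) might look like the following; the tiny word lists are hypothetical stand-ins for a real sentiment lexicon.

```python
# Toy word-list sentiment scorer; the sets below stand in for a lexicon.

POSITIVE = {"win", "great", "best", "favourite", "amazing"}
NEGATIVE = {"lose", "bad", "worst", "boring", "awful"}

def tweet_sentiment(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) for one tweet."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

# Aggregating per team indicates who the favourite in the tournament is.
tweets = [("TeamA", "TeamA played a great match"), ("TeamB", "worst game ever")]
totals = {}
for team, text in tweets:
    totals[team] = totals.get(team, 0) + tweet_sentiment(text)
print("Favourite:", max(totals, key=totals.get))
```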
An Optimized and Secure Association Rule Mining Algorithm in Parallel and Distributed Environment
Sakharam B. Kolpe, N. V. Alone
Data mining is a subfield of artificial intelligence which is mainly used for the extraction of hidden predictive information from large available databases. Handling large data sets is a very tedious and complex task; the solution to this problem is to apply distributed or parallel approaches. The field of distributed data mining has therefore gained increasing importance in the last decade. In distributed data mining, security is the major problem with respect to association rule mining. Association rule mining has become one of the core data mining tasks and has attracted tremendous interest among data mining researchers. The Apriori algorithm by Rakesh Agrawal has emerged as one of the best association rule mining algorithms, and it also serves as the base algorithm for most parallel algorithms. The performance of a data mining algorithm can be accelerated from O(N) to O(N/k) with parallelism, where N is the number of data records and k is the number of nodes in the distributed system. By using various cryptographic techniques and methods, interesting associations and patterns between the variables of a big database can be observed securely. This system addresses the problem of secure association rule mining over a horizontally distributed database. Current techniques for association rule mining require huge processing time and also face a high threat from various attacks. The proposed system can provide security at the time of mining data from various data sources and also reduce computation time by using a parallel programming environment such as OpenMP. The goal of the proposed approach is to find all association rules with minimum support s and confidence c while minimizing the information disclosed about the private databases held by the players. This system is based on a distributed mining algorithm, K&C, and the AES algorithm; the distributed mining algorithm used here is the distributed version of Apriori. With the proposed approach, speed-up is acquired using parallel computing while preserving the privacy of the data.
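The O(N/k) speed-up comes from horizontally partitioning the transactions and counting candidate supports in parallel. The sketch below illustrates this partitioned support-counting step for 2-itemsets; the paper uses OpenMP, so Python's multiprocessing is used here only as a stand-in, and the cryptographic layer is omitted.

```python
# Parallel support counting over horizontal partitions, the core step of a
# distributed Apriori. Each of k workers handles N/k transactions; partial
# counts are then summed. Illustrative sketch, not the paper's code.

from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def count_pairs(transactions):
    """Count candidate 2-itemsets in one horizontal partition."""
    counts = Counter()
    for t in transactions:
        counts.update(combinations(sorted(t), 2))
    return counts

if __name__ == "__main__":
    data = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    k = 2
    parts = [data[i::k] for i in range(k)]        # horizontal partitioning
    with Pool(k) as pool:
        partials = pool.map(count_pairs, parts)   # O(N/k) work per node
    total = sum(partials, Counter())
    min_support = 2
    frequent = {iset: n for iset, n in total.items() if n >= min_support}
    print(frequent)   # {('a','b'): 2, ('a','c'): 3, ('b','c'): 3}
```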
Survey: Various Prediction Algorithms for Chemical Bond Formation on GPU
Manjiri K. Kulkarni, Dr. J. S. Umale
Nowadays, the pursuit of prediction of chemical bond formation is evident among researchers seeking to discover new drugs or new chemicals. Predictive modeling uses statistics to predict outcomes. Most of the time the event one wants to predict is in the future, but predictive modeling is applicable to any type of event, regardless of when it occurred. In this paper, we focus on a comparative analysis of various prediction algorithms to establish the best algorithm for the prediction of chemical bond formation, with observations. The performances of these techniques are compared, and it is observed that a parallel genetic algorithm provides better results in accuracy and speedup on a GPU as compared to the other prediction techniques.
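For reference, the core loop of a genetic algorithm like the one the survey found fastest is sketched below. The fitness function is a toy stand-in for scoring a candidate bond-formation prediction, and in the surveyed work the population would be evaluated in parallel on the GPU rather than serially as here.

```python
# Minimal genetic-algorithm loop: selection, one-point crossover, mutation.
# The fitness function is a hypothetical placeholder.

import random

def fitness(candidate):
    """Toy objective: maximized when all genes are near 0.5."""
    return -sum((g - 0.5) ** 2 for g in candidate)

def evolve(pop_size=20, genes=8, generations=50):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection of the fittest
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genes)           # single-gene mutation
            child[i] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```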
The increasing interest in the development, operation, and integration of a large amount of renewable energy resources, such as offshore wind farms and photovoltaic generation in deserts, leads to an emerging demand for the development of multi-terminal high voltage direct current (MTDC) systems. According to performed studies, the voltage source converter based HVDC (VSC-HVDC) system is the best option for realising a future multi-terminal HVDC system that integrates a bulk amount of energy over long distances into the AC grid. The most important drawback of VSC-HVDC systems is their need for fast HVDC circuit breakers. This paper aims at summarizing HVDC circuit breaker technologies, including recent significant attempts in the development of modern HVDC circuit breakers. A brief functional analysis of each technology is presented. Additionally, the different technologies are compared based on information derived from the literature. Finally, recommendations for the improvement of circuit breakers are presented.
Automatic Query Formulation for Extracting Hidden Web: A Review
Manvi Siwach, Swati Gogia
There is a lot of data on the internet which is not indexed by conventional search engines. This web content is what we call the hidden web or deep web. Users have to fill in various forms to access this hidden content, so there is a need for an interface which helps us fill the forms automatically. This can be done by extracting attributes from HTML pages, comparing those attributes with the user's query, and using the matching attributes to fill the forms automatically. For best results, an ontology, specifically a domain ontology, can be used for filling the forms of the hidden web.
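A minimal sketch of the attribute-matching step might look like the following; the form attributes and user query are hypothetical, and a real system would add ontology-based matching of synonymous labels rather than the simple substring test used here.

```python
# Match attributes extracted from an HTML search form against the fields
# of a user query, so matched attributes can be filled automatically.
# All names below are hypothetical examples.

form_attributes = ["author", "title", "publication_year", "subject"]

def match_attributes(query: dict, attributes: list) -> dict:
    """Map each query field to a form attribute with a similar name."""
    filled = {}
    for field, value in query.items():
        for attr in attributes:
            if field.lower() in attr.lower() or attr.lower() in field.lower():
                filled[attr] = value
                break
    return filled

user_query = {"author": "R. Agrawal", "year": "1994"}
print(match_attributes(user_query, form_attributes))
# {'author': 'R. Agrawal', 'publication_year': '1994'}
```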
Iterative Success Variety Method for Joined Words in Tamil Language
D. Umamageswari, K. Kumanan
A stemmer plays a vital role in Natural Language Processing applications, where it is used to improve the accuracy of applications such as Information Retrieval Systems (IRS) for indexing based on recall/precision factors, POS taggers, search engines, and so on. A stemmer is a pre-processing step that extracts the root (stem or base) of words. Stemmers for classes of words such as nouns, i.e. plural forms or verbal forms of simple words, are available. This paper presents a stemmer for joined or compound words, a class of words in which the beginning character of the second word is formed with some form of the ending character of the first word. It uses a dictionary of root (stem or base) words to extract the vital part of the word, and it produces better accuracy when there is a large number of root words in the dictionary. It works as a context-based stemmer, selecting the root of the second word based on context. For this, it needs to collect the vital parts of words in the classes of nouns and verbs.
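As a language-neutral sketch of the dictionary-based splitting idea, the snippet below tries successively longer first parts of a joined word against a root dictionary and accepts a split when both parts are roots. The toy English dictionary stands in for Tamil roots, and the real method must additionally restore the merged boundary character at the join, which this sketch omits.

```python
# Dictionary-based splitting of a joined (compound) word into two roots.
# The ROOTS set is a hypothetical stand-in for a Tamil root dictionary.

ROOTS = {"sun", "flower", "rain", "bow"}

def split_joined(word: str, roots: set):
    """Return (first_root, second_root) if the word splits into two roots."""
    for cut in range(len(word) - 1, 0, -1):
        first, rest = word[:cut], word[cut:]
        if first in roots and rest in roots:
            return first, rest
    return None   # no valid split found in the dictionary

print(split_joined("sunflower", ROOTS))   # ('sun', 'flower')
print(split_joined("rainbow", ROOTS))     # ('rain', 'bow')
```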