Comparative Study on Sentiment Analysis Techniques and User Behavior Prediction on Twitter Data
Aarabhi Babu, Dr. Vince Paul
People share knowledge, experiences and thoughts with the world through social media such as blogs, forums, wikis, review sites, social networks and tweets. This has changed the manner in which people communicate and influences the social, political and economic behavior of other people on the Web. This work focuses on different sentiment analysis techniques, presents a comparative study of them, and proposes a model that applies sentiment analysis to Twitter data for user behavior prediction. The proposed system relies mainly on Twitter data: sentiment analysis is performed on tweets to predict the behavior expressed in them and, by extension, the behavior of the users who post them. By analyzing the tweets of certain groups, the behavior of those groups can also be identified. Social networking websites are major sources of public opinions and views on the prevalent social issues at a given point in time; websites like Twitter reflect public views through the millions of messages posted by users worldwide. A survey is conducted to create the required training data set, and with this training set tweets are analysed and the behavior behind each tweet is extracted. Based on these tweets, any ongoing malicious activities can be detected and banned.
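The lexicon-based classification surveyed here can be sketched with a toy word-list scorer; the word lists below are illustrative examples, not a real sentiment lexicon, and a full system would also handle negation, abbreviations and emoticons.

```python
# Minimal word-list sentiment scorer: a hedged sketch of lexicon-based
# sentiment analysis. The word lists are illustrative, not exhaustive.
POSITIVE = {"good", "great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "hate", "sad", "terrible", "awful"}

def sentiment(tweet: str) -> str:
    words = tweet.lower().split()
    # Score = (# positive words) - (# negative words)
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great phone"))      # positive
print(sentiment("terrible battery and bad screen"))
```

In practice such scores over a user's or group's tweet stream would feed the behavior-prediction step the abstract describes.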
Identification and Removal of Interference in AspectJ Programs
G. Barani V. Suganya S. Rajesh
The motivation behind introducing Aspect Oriented Programming (AOP) has been to increase the modularity of software by allowing a clear separation of core and cross-cutting concerns. AspectJ is a widely used AOP language with excellent support from the Eclipse community. In AspectJ, complex interactions between the base code and aspects can make the code very difficult to understand and maintain. In addition, there is a possibility of interference between cross-cutting functionalities offered through advice woven at the join points in AspectJ software. These interferences cannot be identified by the developer without a proper analysis of their existence. To address the problems arising from interference in AspectJ programs, this paper summarizes the work done to provide capabilities for defining and identifying the rules of violation. A tool has been developed to define and identify interferences and to provide possible solutions for their removal in a given AspectJ code.
Analysis and Evaluation of Techniques for Managing Unstructured and Semi-Structured Data in a MapReduce Platform
Dina Darwish
The increasing demand for large-scale data mining and data analysis applications drives both industry and academia to create new types of highly scalable data-intensive computing platforms. MapReduce is one of the most popular such platforms, in which the dataflow takes the form of a directed acyclic graph of operators. This paper presents a modified version of the MapReduce framework developed to manage unstructured and semi-structured data. Most kinds of database systems are designed to manage well-structured data, requiring users to design a schema before storing and querying data; however, there is a significant amount of unstructured and semi-structured data that cannot be effectively managed this way. In this paper, we develop the engineering principles and practices to manage unstructured and semi-structured data in a MapReduce platform. Having a single data platform for managing well-structured, unstructured and semi-structured data is beneficial to users: this approach significantly reduces integration, migration, development, maintenance, and operational issues. The Hadoop environment is used to write SQL/XML schemas first; all commands are then translated to Hadoop as MapReduce jobs. The efficiency of using this method in MapReduce software is discussed and evaluated.
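The MapReduce dataflow described above can be sketched in plain Python with the classic word-count example; the function names are illustrative and do not correspond to the Hadoop API.

```python
from collections import defaultdict

# Toy MapReduce word count: map emits (word, 1) pairs, the shuffle groups
# pairs by key, and reduce sums each group. This mirrors the dataflow of a
# real MapReduce job in miniature.
def map_phase(record):
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

records = ["hadoop maps data", "hadoop reduces data"]
pairs = [p for r in records for p in map_phase(r)]
counts = reduce_phase(shuffle(pairs))
print(counts)
```

In Hadoop the same three stages run distributed across nodes; a translated SQL/XML command would be compiled into one or more such map/reduce pairs.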
Cloud computing can be described in terms of services provided to users on a "pay as you go" basis. As a new technical field, cloud computing is gaining attention due to its continuous growth. With the help of the Internet, data and resources are shared between machines. If database services are provided through cloud computing, the result can be called a cloud database. Nowadays, DBaaS is an upcoming service in the cloud computing scenario, because more and more people wish to move their database workloads to the cloud, where they are affordable at low cost.
A Unified Framework for Both Graph and Pattern Matching Using IPaMWAG
Mrs. R.Hemalatha , Ms.M.Aarthi
In general, the problem of finding subgraphs that best match a user's query on Weighted Attributed Graphs (WAGs) is an open research area. A WAG is defined as a graph in which every node exhibits multiple attributes with varied, non-negative weights. An example of a WAG is a co-authorship network, where every author has multiple attributes, each corresponding to a specific topic (e.g., databases, data mining, machine learning), and the amount of expertise in a specific topic is represented by a non-negative weight on that attribute. A typical user query in this setting specifies both property patterns between query nodes and constraints on the attribute weights of the query nodes. A ranking function that unifies matching on attribute weights over the nodes and on the graph structure is proposed, and the problem of retrieving the best match for such queries is shown to be NP-complete. Moreover, a fast and effective top-k pattern matching algorithm and a top-k graph search algorithm for weighted attributed graphs are proposed. In an extensive experimental study with multiple real-world datasets, the proposed algorithm exhibits significant speed-up over competing approaches; on average, the proposed technique is faster in query processing than the strongest competing technique.
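A unified ranking of this kind can be sketched as follows; the cosine match on attribute weights and the 0.5 blend with a precomputed structural score are our illustrative assumptions, not the paper's actual formula.

```python
import math

# Hedged sketch of a ranking function that unifies attribute-weight match
# and structural match. Attribute match is cosine similarity between weight
# vectors; the structural score per candidate is assumed precomputed.
def attribute_score(query_wts, node_wts):
    keys = set(query_wts) | set(node_wts)
    dot = sum(query_wts.get(k, 0.0) * node_wts.get(k, 0.0) for k in keys)
    nq = math.sqrt(sum(v * v for v in query_wts.values()))
    nn = math.sqrt(sum(v * v for v in node_wts.values()))
    return dot / (nq * nn) if nq and nn else 0.0

def rank(query_wts, candidates, structure_scores, alpha=0.5):
    scored = [(name, alpha * attribute_score(query_wts, wts)
                     + (1.0 - alpha) * structure_scores[name])
              for name, wts in candidates.items()]
    return sorted(scored, key=lambda t: -t[1])  # best match first

query = {"databases": 0.8, "data mining": 0.2}
candidates = {"a": {"databases": 0.9}, "b": {"machine learning": 1.0}}
structure = {"a": 0.7, "b": 0.9}
print(rank(query, candidates, structure)[0][0])  # node "a" ranks first
```

A top-k search would prune candidates whose best possible blended score cannot beat the current k-th result.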
Survey on Emerging Security Mechanisms for Medical Cyber Physical Systems
Anupama C V, Misha Ravi
A Medical Cyber Physical System (MCPS) is able to transmit and process data collected from health monitoring systems built on body area networks (BANs). The acquired data is transmitted to a private or public cloud that contains a set of algorithms for analysing the patient data; this medical data should be kept secret. After the data is analysed, feedback is given to the doctors so they can take corrective action. The system includes a data acquisition layer capable of acquiring data from body area networks, a data aggregation layer that concentrates the gathered signal information, a cloud processing layer that includes many analysis algorithms, and an action layer that produces either a physical action or decision support. Each layer contains hardware such as sensors and cloud computing architectures, so the data should be secured at every layer. Many conventional encryption mechanisms are used, and emerging encryption technologies such as homomorphic encryption standards are also used in MCPS.
The main objective of this project is to add mobility and automation to the process of managing student information in an institute. In a real-world scenario such as a college campus, information in the form of notices, hand-written manuals and verbal messages is spread among the students. Today it is essential to use not only the conventional forms of communication but also new forms such as mobile phone technology, for faster and easier communication among students. The chosen platform is Android. The core idea of this project is to implement an Android-based Mobile Campus application for the advancement of the institution and its educational system. The application will be used by students, teachers and parents. In the previous system, all information had to be viewed in a hard file or on a website, and while searching for any particular piece of information it was difficult to access and took a lot of time to find the right website. To overcome this problem, a smartphone-based application using Android can make the process easier, more secure and less error-prone, and more useful information can be delivered through the system. When sensitive data is stored on the device, apps can ensure that it is stored securely using encryption. Apps also exchange sensitive information with remote servers. The Android platform provides a number of algorithms for encrypting sensitive information, some of which provide stronger cryptographic guarantees than others. Cryptographic algorithms are harder to break when there is more unpredictability in the random numbers generated for use in encryption; a way of introducing such unpredictability in Android is to use the SecureRandom class. The need for encryption is twofold: firstly, encryption makes it difficult to read and use any sensitive information that an app stores on a device; secondly, encryption adds an additional layer of security to sensitive information that is exchanged between apps and remote servers.
A Web Navigation Framework to Identify the Influence of Faculty on Students Using Data Mining Techniques
Rekha Sundari.M, Srininvas.Y Prasad Reddy.PVGD
Analyzing student web browsing behavior is a challenging task. This paper mainly focuses on a methodology to identify the influencing factor that has driven a student towards navigating a particular website. Most research in this direction has left untouched the estimation of the influence of faculty on students' behavioral patterns. In this work we present a novel statistical approach based on an adaptive Gaussian mixture model, where the clustered data is given as input to the model to classify student navigation patterns. Regression analysis is used to find the relationship between students' navigational behavior and faculty experience and rating. This article considers a real dataset from GITAM University for experimentation.
Securing E-healthcare records on Cloud Using Relevant data classification and Encryption
Rizwana Shaikh Jagrutee Banda Pragna Bandi
Information security is always an area of concern for cloud users. The confidentiality of Electronic Health Records (EHRs) is a major issue when commercial cloud servers are used by hospital staff to store patients' medical records, because the records can then be viewed by anyone. There are various issues and challenges in achieving detailed, cryptography-based data access control. To achieve fine-grained and scalable access control for medical records stored on cloud servers, we propose Attribute Based Encryption (ABE) techniques, such as key-policy attribute-based encryption and role-based encryption, to encrypt each patient's medical record file. We describe an approach that enables secure storage of patients' health data with controlled sharing. We explore key-policy attribute-based encryption to enforce the patient's access control policy, such that everyone can download the data but only authorized users can view the medical records. A high degree of patient privacy is maintained using multiple cryptographic algorithms applied to the various types of data.
Survey on Matching Of Users in Social Networks Using Friend Book
Ms. D.Saral Jeeva Jothi Ms.R.Ramyadev
People use various social network sites for different purposes. Social networks provide an important source of information regarding users and their interactions, which is very valuable for identifying identical users and for recommender systems. In this survey paper we aim to address the identical-user identification problem and recommend friends based on the lifestyles of users in social networking sites (SNS). A methodology called MOdeling Behaviour for Identifying Users across Sites (MOBIUS) is used for finding a mapping among the identities of individuals in social media sites. Recommender systems are a subclass of information filtering systems that seek to predict the 'preference' or 'rating' that a user would give to a person, item or place. Social networking services focus on suggesting friends based on the user's social graph or geolocation, which does not take the user's likes and dislikes into account. This survey also investigates an app that utilizes user information and makes recommendations by considering the user's points of interest and calculating the similarity between each pair of users, thus recommending friends to the user across heterogeneous sites.
Efficient Data Transmission with Barcode Modulation Based On DPSKOFDM
Neerudu Uma Maheshwari
Transmitting very large amounts of data securely and efficiently is a new challenge for wireless communication systems. Many techniques have been developed to transmit data securely and in a small space, but all of them have drawbacks: encrypting large data in a small space and preserving data privacy are two problems that lead to manipulation in handheld mobile transmissions. For transmission we use both image processing and communication concepts. In this paper we use the DWT for barcode modulation in handheld devices to transmit very large data volumes through DPSKOFDM. Using DWT-based barcode modulation for data transfer, we obtain high accuracy and low complexity, as shown by the SNR vs. BER performance.
A Study on Prediction of User Behavior Based on Web Server Log Files in Web Usage Mining
Anurag kumar Vaishali Ahirwar Ravi Kumar Singh
Nowadays, the World Wide Web has grown far beyond expectations. The Internet is growing day by day, and the number of online users is rising with it. Extracting interesting knowledge from such huge data demands new logic and new methods. Users spend most of their time on the Internet, and each user's behavior differs from the others'. Web usage mining is the category of web mining that helps in automatically discovering user access patterns, and it is a leading research area in web mining concerned with web users' behavior. This paper emphasizes predicting user behavior using web server log files, click-stream records and user information. The web pages users visit, the hyperlinks they frequently follow and the pages they frequently access are stored in web server log files. A web log, along with the identity of the user, captures their browsing behavior on a website; we discuss this behavior through an analysis of different algorithms and methods.
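The first step in such log-based analysis can be sketched as follows: extract the requested page from each Common Log Format line and count the most frequently accessed pages. The log lines below are made up for illustration.

```python
from collections import Counter

# Sketch: parse Common Log Format lines and count page accesses.
# The sample lines are fabricated for illustration.
LOGS = [
    '10.0.0.1 - - [10/Oct/2023:13:55:36] "GET /index.html HTTP/1.1" 200 2326',
    '10.0.0.2 - - [10/Oct/2023:13:55:40] "GET /about.html HTTP/1.1" 200 120',
    '10.0.0.1 - - [10/Oct/2023:13:56:01] "GET /index.html HTTP/1.1" 200 2326',
]

def requested_page(line):
    # The request ("GET /path HTTP/1.1") is the first quoted field;
    # its second token is the path.
    return line.split('"')[1].split()[1]

counts = Counter(requested_page(line) for line in LOGS)
print(counts.most_common(1))  # [('/index.html', 2)]
```

Per-user session reconstruction and pattern discovery would build on counts like these, grouped by client IP and timestamp.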
This project is very useful in the modern world: it demonstrates a good way of controlling a traffic light system through IoT. The project is a prototype with two traffic lights. There are many platforms that provide a bridge between the user and the device; this device uses the simple and capable IoT platform "ThingSpeak". There are many existing ways of controlling a traffic light system, such as traffic density calculation, yet it wastes time to stand idle at a signal even at night when there is no heavy traffic. This project will be useful for the traffic authority to control particular signals online from anywhere. The main components are a Raspberry Pi 3 and a webpage prototype. The device is designed in such a way as to avoid accidental switching, and it is possible to control more traffic lights by upgrading the Python and HTML code.
Detecting Spammers on Social Networks
Mrs. A.Jesila Banu, Mr. N.Nasurudeen Ahamed, Mr.B.Manivannan, Mrs.K.Vanitha, Dr.M.Mohamed
Third-party apps are a major reason for the popularity and addictiveness of Facebook. Unfortunately, hackers have realized the potential of using apps for spreading malware and spam. The problem is already significant: at least 13% of the apps in our dataset are found to be malicious. So far, the research community has focused on detecting malicious posts and campaigns. In this paper, we ask the question: given a Facebook application, can we determine whether it is malicious? Our key contribution is in developing FRAppE (Facebook's Rigorous Application Evaluator), arguably the first tool focused on detecting malicious apps on Facebook. To develop FRAppE, we use information gathered by observing the posting behavior of 111K Facebook apps seen across 2.2 million users on Facebook. First, we identify a set of features that help distinguish malicious apps from benign ones; for example, malicious apps often share names with other apps, and they typically request fewer permissions than benign apps. Second, leveraging these distinguishing features, we show that FRAppE can detect malicious apps with 99.5% accuracy, with no false positives and a high true positive rate (95.9%). Finally, we explore the ecosystem of malicious Facebook apps and identify mechanisms that these apps use to propagate. Interestingly, many apps collude and support each other; in our dataset, 1584 apps enable the viral propagation of 3723 other apps through their posts. In the long term, we see FRAppE as a step toward creating an independent watchdog for app assessment and ranking, so as to warn Facebook users before they install apps.
Autonomous Car: The Future of Secured Transportation
Ms.Jashwanthi A.
An autonomous car is a driverless or robotic car that uses techniques such as radar, lidar, GPS, odometry and computer vision to sense its environment and navigate without human input. It uses the Vehicle-to-Vehicle (V2V) communication protocol to communicate and share data with surrounding vehicles, but data hiding cannot be done. In this paper, we implement the V2V protocol using ZigBee for privacy in data sharing. ZigBee provides secure communication and transport of cryptographic keys and helps in controlling the devices. It is developed using the basic security framework defined in IEEE 802.15.4.
Today, smart objects in the Internet of Things (IoT) are able to detect their state and share it with other objects across the Internet, thus collaboratively making intelligent decisions on their own. Humans always find alternatives around them to carry out their work smoothly; service provisioning in IoT should likewise be capable of providing similar or alternate objects that are aligned with user requirements, current context and previous knowledge, without any human intervention. With the advancement of automation technology, life is getting simpler and easier in all aspects, and automatic systems are now preferred over manual ones. Traditional methods of household chores are being replaced by automation systems adapted to the modern world, as manual systems are no longer acceptable to the new generation. We report an effective Internet of Things implementation for monitoring regular domestic conditions through a sensing system. The home automation architecture is based on an appliance fault detection unit, a kitchen safety unit and a grocery monitoring unit.
Increasing the Accuracy of Object Recognition Using Artificial Neural Network Technology
Mahmoud Mohamed Hamoud Kaid Muawia Mohamed Ahmed Mahmoud
This paper explores the possibility of building a software system that uses two techniques to identify objects: an artificial intelligence technique based on artificial neural networks, and a digital image processing technique based on invariant moments (Hu moments). The system is able to recognize regular and irregular geometric shapes, using artificial neural networks to reduce the rate of misidentified objects and integrating them online with identification based on the invariant-moment technique. The network is first trained on these shapes; afterwards, the system outputs the matching shape with high speed and accuracy. When each technique was used separately there was a clear misidentification rate; when the two technologies were merged, the misidentification rate dropped markedly, and the system was able to recognize meticulously even objects that were completely absent from training.
The Components That Can Build a Flexible & Efficient Software Defined Network
Deepak Kumar Manu Sood
SDN (Software Defined Networking) is a new networking approach for the current networking industry. SDN has attracted researchers' attention because there is wide scope for innovation and research. The main concept behind SDN networks is the separation of the control plane from the data plane; this natural feature makes SDN flexible and scalable. We describe some of the important components needed to make current SDN networks better and more efficient, so that they can be managed easily and updated whenever needed, without any interruption of services. We also discuss how to manage the data plane and control plane, and how to identify where a fault has occurred.
RF Based Localization Antenna for Wireless Communication
Vikas A
The paper presents research on the development of an RF-based antenna. A genetic algorithm is used to optimize the two antenna architectures. The ground plane has dimensions 100.5×100.5×1.5 mm, while the metal plate has dimensions 16.71×1.5×110.52 mm. The main goal of the article is a comparison of the directivity and bandwidth of the proposed antennas. In our analysis, the method of moments (MoM) is used to compute the current distribution and directivity of the Yagi antenna. Microstrip technology is used for the planar collinear monopole antenna, and simulation with the ground plane has been performed using the Ansoft HFSS 3D simulator. Prototypes have been realized and measured.
MMLSL: Modelling Mobile Learning for Sign Language
Hebah H.O. Nasereddin
Using mobile technology for sign language is really valuable and can improve learners' learning and communication capacity, using mobile platform applications through e-learning to enhance, support and facilitate teaching and learning for deaf people. The proposed MMLSL uses portable mobile devices such as tablet PCs and smartphones communicating over wireless transmission, with special equipment attached to the devices. The main goal of the proposed MMLSL is to integrate text, audio and video with interaction between participants. Mobile application software can provide a distance learner with up-to-date information, supporting materials and other types of knowledge and communication. The proposed MMLSL converts the motion of sign language into comprehensible Arabic text that a non-disabled person can understand; this communication channel links a deaf person to a non-deaf person. The reverse conversion, from voice to comprehensible Arabic text, uses existing programs such as IBM ViaVoice, which supports Arabic. The proposed MMLSL uses Arabic sign language finger spelling, which indicates the Arabic letters. The proposed MMLSL has many advantages over normal learning: it helps the disabled to improve their knowledge and facilitates their needs; it motivates students towards learning; m-learning increases the knowledge of the students; because of its portable nature, mobiles are readily available to learners at all times, allowing students to learn anywhere and anytime, which gives a unique experience for learners; it is very helpful for learners who are hesitant or reluctant about normal learning; it helps to focus on individual learners and encourage them; and this type of teaching-learning method helps develop self-confidence and self-esteem among hearing-impaired children.
Classifying Sentiments of Twitter Data Using a Bayes Classifier and Neural Network
Megha Saraswat Mr. Rahul Patel
Text mining is a classical domain of research and development in which text data is used to prepare data models for classifying similar patterns of text documents. Classification and categorization algorithms are implemented to perform this task. However, text in social sites and blogs is not only categorized by subjective similarity; it can also be classified on the basis of the author's sentiments or emotional aspects. Identifying emotional components and features in social-site text and classifying posts on the basis of their sentiments is therefore a relatively new domain of research and development. In the presented work, micro-blog text is classified according to its sentiments and emotional features, and Twitter micro-text is used for training and testing supervised data models. In this context, a supervised hybrid classification technique is developed using a Bayes classifier and a back-propagation neural network (BPN). The key role of the Bayes classifier is to find the emotional components in terms of a positive word list and a negative word list. Using these components, each message is encoded as a numeric string; these numeric strings are then used with the BPN algorithm for training and testing. In addition, to improve classification performance, abbreviations and smileys are also recovered as emotional components. The proposed technique is implemented using Java, with the WEKA library used for the classifiers. After implementation, the performance of the proposed technique is evaluated and compared with a similar classification technique; in the comparative performance study, the proposed model is found to be more efficient and accurate than the traditionally used technique.
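The encoding step described above, turning a message into a numeric vector via positive and negative word lists before it is fed to the neural network, might look like the following sketch; the word lists and the three-feature layout are illustrative assumptions, not the paper's exact encoding.

```python
# Hedged sketch of the Bayes word-list encoding stage: each message is
# mapped to a small numeric feature vector [positive count, negative
# count, message length] that a BPN could then be trained on.
POSITIVE = {"win", "great", "happy", ":)"}
NEGATIVE = {"lose", "awful", "sad", ":("}

def encode(message):
    words = message.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return [pos, neg, len(words)]  # numeric vector for the classifier

print(encode("great day happy :)"))  # [3, 0, 4]
```

Note how the smiley ":)" contributes to the positive count, matching the abstract's point that abbreviations and smileys carry emotional signal.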
Real Time College Bus Tracking Application for Android Smartphone
Supriya Sinha , Pooja Sahu Monika Zade , Roshni Jambhulkar Prof. Shrikant V. Sonekar
This paper proposes a Real-Time College Bus Tracking Application which runs on Android smart phones. This enables students to find out the location of the bus so that they won’t get late or won’t arrive at the stop too early. The main purpose of this application is to provide exact location of the student’s respective buses in Google Maps besides providing information like bus details, driver details, stops, contact number, routes, etc. This application may be widely used by the college students since Android smart phones have become common and affordable for all. It is a real time system as the current location of the bus is updated every moment in the form of latitude and longitude which is received by the students through their application on Google maps. The application also estimates the time required to reach a particular stop on its route. The application uses client-server technology
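The client-side arithmetic such an application might perform, turning the bus's reported latitude/longitude into a distance and a rough ETA, can be sketched as follows; the haversine great-circle formula, the 30 km/h average speed and the coordinates are illustrative assumptions.

```python
import math

# Sketch: distance from the bus's reported position to a stop via the
# haversine formula, and a rough ETA at an assumed average speed.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eta_minutes(dist_km, speed_kmh=30.0):
    return 60.0 * dist_km / speed_kmh

# Illustrative bus position and stop position (Nagpur area)
d = haversine_km(21.1458, 79.0882, 21.1702, 79.0900)
print(round(d, 2), "km,", round(eta_minutes(d), 1), "min")
```

A real deployment would refine the ETA with the route geometry and live traffic rather than a fixed average speed.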
An Improved Document Clustering with Multiviewpoint Similarity/Dissimilarity Measures
Monika Raghuvanshi Rahul Patel
Clustering is an unsupervised learning technique aimed at grouping a set of objects into clusters, where each cluster consists of objects that are similar to one another within the same cluster and dissimilar to objects belonging to other clusters. The similarity between a pair of objects can be defined either explicitly or implicitly, and all clustering methods have to assume some cluster relationship among the data objects to which they are applied. In this paper we introduce a novel multiviewpoint-based similarity measure and two related clustering methods. The major difference between a traditional dissimilarity/similarity measure and ours is that the former uses only a single viewpoint, the origin, while the latter utilizes many different viewpoints: objects assumed not to be in the same cluster as the two objects being measured. Using multiple viewpoints, a more informative assessment of similarity can be achieved; theoretical analysis and an empirical study are conducted to support this claim. Two criterion functions for document clustering are proposed based on this new measure. We compare them with well-known k-means clustering algorithms that use a Euclidean distance measure on various document collections to verify the advantages of our proposal.
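The multiviewpoint idea can be sketched as follows, averaging a similarity over viewpoints assumed to lie outside the cluster of the two documents being compared; the dot product of difference vectors used here is a simplified stand-in for the paper's measure.

```python
# Hedged sketch of multiviewpoint similarity (MVS): similarity of two
# documents di, dj is averaged over viewpoints dh assumed to be outside
# their cluster, using (di - dh) . (dj - dh) as the per-viewpoint term.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def mvs(di, dj, viewpoints):
    return sum(dot(sub(di, dh), sub(dj, dh)) for dh in viewpoints) / len(viewpoints)

di, dj = [1.0, 0.0], [0.9, 0.1]
far_view = [[0.0, 1.0]]    # viewpoint far from both documents
near_view = [[1.0, 0.05]]  # viewpoint close to both documents
print(mvs(di, dj, far_view), mvs(di, dj, near_view))
```

From a distant viewpoint the two close documents look strongly similar; from a viewpoint sitting between them the measured similarity collapses, which is exactly the extra information a single-origin measure cannot express.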
Wireless sensor networks (WSNs) consist of multifunction sensor nodes that are small in size and communicate wirelessly over short distances. Sensor nodes incorporate capabilities for sensing the environment, data processing and communication with other sensors. The unique properties of WSNs increase flexibility and reduce user involvement in operational tasks, such as on battlefields. WSNs present unique and different challenges compared to traditional networks; in particular, wireless sensor nodes are battery operated and often have limited energy and bandwidth available for communications. With the continuous growth in the use of WSNs in sensitive applications, it has become a requirement to provide security in WSNs, and achieving security in resource-constrained WSNs is a challenging research task. In this paper, we outline what a WSN is, the need for security in WSNs, security issues, possible attacks on WSNs, security requirements, and finally the security protocols used in WSNs and how they achieve the security requirements.
Routing for Mobile Ad Hoc and Wireless Sensor Network on location basis
K. Rajesh
Routing protocols for mobile ad-hoc networks can be broadly classified as position-based (geographic) and topology-based. Geographic routing uses the location information of nodes to route messages; geographic routing protocols use greedy forwarding, under which a node forwards a packet to a next hop that is closer to the destination than itself. In this paper, we present geographic routing protocols, wireless sensor network routing, position-based routing protocols and hybrid routing protocols presented in the literature recently, including a distance-vector scheme that combines the Greedy Perimeter Stateless Routing protocol and the Ad hoc On-demand Distance Vector routing protocol. Greedy Perimeter Stateless Routing, originally a routing protocol for mobile ad hoc networks, is also applied in wireless sensor networks.
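The greedy forwarding rule described above can be sketched as: pick the neighbor closest to the destination, and forward only if it improves on the current node's own distance (otherwise a recovery mode such as GPSR's perimeter routing must take over).

```python
import math

# Sketch of greedy geographic forwarding over 2-D node coordinates.
def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbors, dest):
    # Neighbor geographically closest to the destination
    best = min(neighbors, key=lambda n: dist(n, dest))
    # Forward only on strict progress; None signals a local maximum
    # where perimeter (face) routing would be needed.
    return best if dist(best, dest) < dist(current, dest) else None

current = (0.0, 0.0)
neighbors = [(1.0, 0.5), (0.5, 2.0), (-1.0, 0.0)]
dest = (5.0, 1.0)
print(greedy_next_hop(current, neighbors, dest))  # (1.0, 0.5)
```

The `None` case is precisely the "void" that hybrid schemes like the GPSR/AODV combination discussed here are designed to escape.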
In today's world there is a huge demand for storing information available in newspapers and other paper documents on a computer. One simple way to store newspaper information in a computer system is to scan the paper first; it can then be stored, and changes can be made as required. The result depends basically on the camera quality, the paper quality and the quality of the images. Image processing is one of the fastest-growing fields of research and technology in today's world. Detecting text in images and identifying characters is a challenging problem. Image content can be divided into two types: content with attributes like texture and shape, and content in the form of text. Detecting text in captured images remains a somewhat challenging task.
Implementation of Personalized Web Search with Privacy Protection
Dr. K. Sagar
For information retrieval, the search engine plays a key role, yet users may not always receive the right or appropriate results at the beginning of a search. Such irrelevance is largely due to the enormous variety of users' contexts and backgrounds as well as the ambiguity of texts. Moreover, when the same query is submitted by different users, most search engines return the same results regardless of who submits the query, even though each user has different information needs. Hence there is a need for personalized web search. Personalized web search (PWS) has shown its effectiveness in improving the quality of various search services on the Internet; however, users are reluctant to disclose their private information during search, and this has become a major barrier to the wide proliferation of PWS. The proposed PWS framework, called UPS, adaptively generalizes profiles by queries while respecting user-specified privacy requirements. The use of the MP model in this project creates the privacy element and at the same time gives us the relevancy between websites. This personalized web search creates user-oriented, effective navigation.
Review on 32-Bit IEEE 754 Complex Number Multiplier Based on FFT Architecture using BOOTH Algorithm
Ms. Anuja A. Bhat Prof. Mangesh N.Thakare
In this paper, we present a review of a high-speed, low-power, low-delay 32-bit IEEE 754 floating-point complex multiplier based on the Booth algorithm, which includes a 32-bit floating-point adder, subtractor and multiplier. Multiplication is a fundamental operation in many Digital Signal Processing (DSP) applications such as convolution, the Fast Fourier Transform (FFT) and filtering, and in the arithmetic and logic unit (ALU) of microprocessors. Since multiplication dominates the execution time of most DSP algorithms, there is a need for a high-speed multiplier. The main objective of this research is to reduce delay and power and to increase speed. The coding will be done in VHDL; synthesis and simulation will be done using the Xilinx ISE simulator.
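The reviewed multiplier is a VHDL hardware design; as a language-neutral illustration of the recoding idea behind it, a radix-2 Booth multiplication of signed integers can be sketched in Python (this is a generic textbook sketch, not the authors’ 32-bit IEEE 754 floating-point design):

```python
def booth_multiply(multiplicand: int, multiplier: int, bits: int = 8) -> int:
    """Radix-2 Booth multiplication of two signed `bits`-wide integers.

    Booth recoding scans the multiplier with an implicit extra 0 to the
    right of the LSB: a 0->1 transition starts a run of ones (subtract),
    a 1->0 transition ends one (add), so long runs of ones cost nothing.
    """
    product = 0
    prev_bit = 0  # the implicit bit to the right of the LSB
    for i in range(bits):
        bit = (multiplier >> i) & 1
        if (bit, prev_bit) == (0, 1):    # end of a run of 1s: add
            product += multiplicand << i
        elif (bit, prev_bit) == (1, 0):  # start of a run of 1s: subtract
            product -= multiplicand << i
        prev_bit = bit
    return product
```

A hardware implementation would do the same recoding with shift registers and a single adder/subtractor; the hedged Python form only demonstrates the algebra.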
Warning Time Analysis for Emergency Response in Sumbawanga City for the Repeat of Magnitude 7.2 Earthquakes of 1919 Using Proposed Community Earthquake Early Warning System
Asinta Manyele
Sumbawanga city, with a population of about 90,000, has experienced several damaging earthquakes from the Kanda Fault System, a seismically active fault on its eastern side encompassing the Lake Rukwa and Lake Tanganyika basins. The magnitude 7.2 earthquake of July 1919 is one of the historical earthquakes from the Kanda fault that generated damaging ground motions centered on Sumbawanga city while reaching several towns and cities situated up to several kilometers from its epicenter. The historical earthquake record of the region indicates that large earthquakes are expected from the Kanda fault system in the near future. Community Earthquake Early Warning Systems (CEEWS) are tools that capture earthquake onset times and ground motion levels in communities for emergency response. To prepare for future emergency management of a magnitude 7.2 earthquake in Sumbawanga city, this study evaluates the warning times achievable by deploying a CEEWS in the city. Warning times calculated by simulating the event indicate that Sumbawanga city would have approximately 8 seconds of warning before the arrival of strong shaking if processing and transmission delays are minimal. These warning times are meant to allow appropriate emergency precautionary actions to be taken by government officials, companies and individuals during imminent earthquakes.
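Earthquake early warning exploits the fact that the fast but weak P-wave arrives before the slower, damaging S-wave. A minimal back-of-the-envelope sketch of the warning-time calculation follows; the wave speeds and system delay are generic assumed values, not parameters taken from this study:

```python
def warning_time(epicentral_distance_km: float,
                 vp_km_s: float = 7.0,     # assumed typical P-wave speed
                 vs_km_s: float = 3.9,     # assumed typical S-wave speed
                 system_delay_s: float = 2.0) -> float:
    """Seconds of warning between issuing an alert on P-wave detection
    and the arrival of strong S-wave shaking at the same site."""
    t_p = epicentral_distance_km / vp_km_s   # P-wave travel time
    t_s = epicentral_distance_km / vs_km_s   # S-wave travel time
    return max(0.0, t_s - t_p - system_delay_s)
```

The model makes plain why warning time grows with epicentral distance and why minimizing processing and transmission delays, as the abstract notes, is decisive for nearby cities.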
Product reviews are now widely used by individuals and organizations in their decision making. However, for profit or fame, people try to manipulate the system through opinion spamming (e.g., writing spam reviews) to promote or demote target products. For reviews to reflect genuine user experiences and opinions, such spam reviews must be detected. The project elaborated below provides an effective method for filtering spam reviews. The MapReduce technique provided by Apache Hadoop is emphasized for processing the reviews. This paper describes the technique, which substantially reduces time complexity when implemented.
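As an illustration of the MapReduce style of processing the abstract refers to, the sketch below flags near-duplicate review texts posted from multiple accounts, a common spam signal. The heuristic and the in-memory map/reduce functions are assumptions for illustration, not the paper’s actual method or a Hadoop job:

```python
from collections import defaultdict

def map_phase(reviews):
    """Map step: emit (normalized_text, reviewer_id) pairs."""
    for reviewer_id, text in reviews:
        yield text.lower().strip(), reviewer_id

def reduce_phase(mapped):
    """Reduce step: group by review text; a text posted by more than
    one reviewer is flagged as likely spam."""
    groups = defaultdict(set)
    for text, reviewer_id in mapped:
        groups[text].add(reviewer_id)
    return {text: ids for text, ids in groups.items() if len(ids) > 1}

reviews = [
    ("u1", "Great product, buy now!"),
    ("u2", "great product, buy now!"),   # near-duplicate from another account
    ("u3", "Battery died after a week."),
]
spam = reduce_phase(map_phase(reviews))
```

In a real Hadoop deployment the same two functions would run as mapper and reducer over review shards, which is where the time-complexity gains from parallelism come from.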
A Hybrid Multi Threaded Task Scheduling and Knapsack Load Balancing in Multiple Cloud Centers
C.Antony C.Chandrasekar
Heuristic task scheduling provides better scheduling solutions for cloud computing by greatly enriching the identification of candidate solutions, ensuring performance optimization and thereby reducing the makespan of task scheduling. Several researchers have put forward scheduling and load balancing algorithms for cloud computing systems. However, reducing response latency while efficiently utilizing detection operator mechanisms (switching between groups while scheduling the corresponding task) and reducing communication cost still remains a challenge. In this paper, a hybrid framework called Multithreaded Locality Task Scheduling and Knapsack Load Balancing (MLTS-KLB) is constructed. MLTS-KLB first schedules tasks using the Multithreaded Locality Parallel Task Scheduling (MLPTS) algorithm, which gives a definition and method for achieving group synchronization. Secondly, a Knapsack load balancing model is constructed by extending the migration-based model. Then, after formulating the scheduling problems in MLTS-KLB and bringing forward the MLPTS-based Knapsack Fair Load Balancing algorithm, the efficiency of MLTS-KLB is validated through simulation experiments. Simulation results demonstrate that the MLTS-KLB framework significantly reduces the latency of parallel jobs and improves the average throughput of the cloud computing environment by minimizing the average task waiting time compared to state-of-the-art works.
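To make the knapsack view of load balancing concrete, here is a minimal greedy sketch: each machine is treated as a knapsack with a capacity, and tasks are placed largest-first on the least-loaded machine that can still hold them. This is a generic illustration of the idea, not the MLTS-KLB algorithm itself:

```python
def knapsack_balance(tasks, capacities):
    """Greedy knapsack-style placement of (task_id, load) pairs onto
    machines with given capacities; unplaceable tasks are marked None
    (in a real system they would be migrated or deferred)."""
    loads = [0.0] * len(capacities)
    placement = {}
    for task_id, load in sorted(tasks, key=lambda t: -t[1]):  # largest first
        candidates = [m for m in range(len(capacities))
                      if loads[m] + load <= capacities[m]]
        if not candidates:
            placement[task_id] = None
            continue
        target = min(candidates, key=lambda m: loads[m])  # least-loaded fit
        loads[target] += load
        placement[task_id] = target
    return placement, loads
```

Placing large tasks first keeps the residual capacities usable for small tasks, which is the intuition behind casting balancing as a knapsack problem.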
Optimized Chemical Bond Pattern Search Using Tree Indexing Technique
Sathya.S, Rajendran.N
Probabilistic classifiers provide outputs that can be interpreted as conditional probabilities over classes for a given input sample. Users rely on sequence patterns to search for chemical formulae and chemical names. Indexing sequence patterns involving chemical formulae identifies and indexes occurrences of certain patterns for efficient search and retrieval. However, identifying chemical formulae has been a fundamental problem, given their increasing presence in sequences. This work addresses it through feature subset selection and an indexing method called Chemical Structured Bond Tree-based Indexing (CSBT-I). The algorithms in the CSBT-I method are analyzed on different sequence patterns and improve chemical bond indexing accuracy over existing methods using the Bond Tree-based Structure and a Sequential Feature Selection algorithm. The bond tree-based structure is created as a temporary indexing structure for a particular indexing requirement and purpose, thereby reducing tree-structure computation time. After the tree-based structure is created for several sequential patterns, indexing is performed using the Bond Indexed Sequence, where several sequential patterns are analyzed to improve search performance on chemical information. Chemical information indexing using multi-level index pruning, with the aid of the sequential feature selection algorithm, identifies and selects frequent and selective chemical molecular information as features to index, thereby reducing chemical bond indexing time. Finally, to support user search queries that require a match on chemical names used as keywords, all possible sub-formulae of formulae that appear in any sequence are indexed. This prunes the indices significantly without materially compromising the quality of the returned results.
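The key operation above, indexing every sub-formula of a formula so that keyword queries match, can be illustrated with a simple suffix-trie index. This is a generic sketch of sub-string indexing, not the paper’s CSBT-I structure:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.entries = set()   # documents containing a formula through this node

def index_formula(root, formula, doc_id):
    """Insert every suffix of the formula string, so any sub-formula
    of it can later be found as a prefix walk from the root."""
    for start in range(len(formula)):
        node = root
        for ch in formula[start:]:
            node = node.children.setdefault(ch, TrieNode())
            node.entries.add(doc_id)

def search(root, pattern):
    """Return the documents whose formulae contain `pattern`."""
    node = root
    for ch in pattern:
        if ch not in node.children:
            return set()
        node = node.children[ch]
    return node.entries
```

Indexing all suffixes makes every sub-formula reachable in time proportional to the query length, at the cost of larger index size, which is exactly the trade-off the paper’s pruning step targets.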
Today, a large amount of data is generated from various heterogeneous sources in day-to-day life. Around 300 million users post images, messages and other data on Facebook per day, and 3.5 billion searches are processed by Google per day. Traditional systems that handle data in megabytes and gigabytes cannot efficiently handle such a huge volume of distributed data coming from heterogeneous sources. Data that is huge in volume, complex, distributed, unmanageable and heterogeneous is called Big Data. In 2004, Google proposed MapReduce, a parallel processing model: whenever a user submits a query, the MapReduce model splits it and assigns the pieces to parallel nodes, which process the query in parallel; the results evaluated by all the nodes are collected and delivered to the user. The Apache open-source project Hadoop adopted this MapReduce framework. This paper presents a system that applies collaborative filtering to big data that has been clustered. This way of mining and managing big data is more efficient than traditional systems. The main challenges of big data are storage, search, manipulation and security; an efficient system should be able to overcome all of them, and these are the parameters on which this paper focuses. The use of the K-means algorithm for clustering has been adopted by many big data practitioners. Clustering divides big data into clusters, with data having the same characteristics in one cluster; it increases accuracy, takes less time to compute results, and can reduce data sizes by a large factor by grouping similar services together. The proposed approach consists of two stages: clustering and collaborative filtering, where clustering is a pre-processing step that separates Big Data into manageable parts.
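The clustering pre-processing stage described above can be sketched with a plain K-means implementation; collaborative filtering would then run within each resulting cluster rather than over the full data set. This is a minimal in-memory sketch for small tuples, not a distributed Hadoop job:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means: partition `points` (equal-length tuples) into k
    clusters by alternating assignment and centroid update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # recompute centroids; keep the old one if a cluster went empty
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters, centers
```

With the data partitioned this way, recommendation queries only need to compare a user against the members of one cluster, which is where the speed-up the abstract claims comes from.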
An Intuitionistic Fuzzy Topsis DSS Model with Weight Determining Methods
V.Thiagarasu R.Dharmarajan
In this paper, a TOPSIS method based on the theory of intuitionistic fuzzy sets (IFS) is proposed to deal suitably with vagueness and hesitancy. A fuzzy TOPSIS decision-making model using entropy weights for multiple criteria decision making (MCDM) problems under an intuitionistic fuzzy environment is proposed, and weight-determining methods based on the Regular Increasing Monotone (RIM) method and the Gaussian method are also utilised. A numerical illustration of the application of the proposed model is given.
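For readers unfamiliar with the underlying method, the classical (crisp) TOPSIS procedure with entropy weighting is sketched below; the intuitionistic fuzzy extension proposed in the paper replaces the crisp ratings with membership/non-membership pairs, which this sketch does not attempt. All criteria are assumed benefit-type and all values positive:

```python
import math

def entropy_weights(matrix):
    """Entropy weighting: criteria whose values are more dispersed
    across alternatives receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)                 # divergence degree
    s = sum(weights)
    return [w / s for w in weights]

def topsis(matrix, weights):
    """Score alternatives by relative closeness to the ideal solution."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]                   # weighted normalized matrix
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Higher scores indicate alternatives closer to the positive ideal and farther from the negative ideal.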
Mining Correlation Rules for Interval - Vague Sets
P.Umasankar V. Thiagarasu
This paper discusses different classes of aggregation operators and their applications to Interval Vague Set (IVS) decision-making problems. A correlation coefficient for interval vague sets is proposed and utilized in Multiple Attribute Group Decision Making (MAGDM) problems. Ordered Weighted Geometric (OWG) operators are used in the MAGDM models proposed for IVSs, and the proposed correlation coefficient is used for ranking alternatives. A data mining algorithm is also utilized to reduce the number of alternatives for the final decision-making process. Numerical illustrations are provided for the MAGDM models for interval vague sets.
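The OWG operator mentioned above aggregates scores by applying weights to the *ordered* values rather than to fixed positions. A scalar sketch (the IVS version aggregates interval membership/non-membership pairs, which is beyond this illustration) is:

```python
def owg(values, weights):
    """Ordered Weighted Geometric operator: sort the values in
    descending order, then take the weighted geometric product.
    Weights attach to rank positions, not to the original arguments."""
    ordered = sorted(values, reverse=True)
    result = 1.0
    for b, w in zip(ordered, weights):
        result *= b ** w
    return result
```

Choosing weights like (1, 0, 0) recovers the maximum and (0, 0, 1) the minimum, so the weight vector tunes the operator between optimistic and pessimistic aggregation.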
There are different methods for duplicate detection in databases, but processing the data quickly while maintaining the quality of the database is difficult. In this paper, the PB and PSNM algorithms are presented to improve the efficiency of duplicate detection in databases while keeping processing time short.
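PSNM here presumably refers to the Progressive Sorted Neighborhood Method (an assumption, as the abstract does not expand the acronym). Its core idea — sort records by a key, then compare pairs at increasing rank distance so that the most likely duplicates are emitted first — can be sketched as a generator:

```python
def psnm(records, key, max_window):
    """Progressive sorted-neighborhood sketch: yield candidate pairs in
    order of increasing rank distance after sorting by `key`, so likely
    duplicates (near neighbors in sort order) surface first."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    for dist in range(1, max_window):          # rank distance 1 first
        for pos in range(len(order) - dist):
            i, j = order[pos], order[pos + dist]
            yield records[i], records[j]
```

A caller would apply its similarity measure to each yielded pair and can stop early once enough duplicates are found, which is what makes the method "progressive".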
Data Mining Techniques for the Prediction of Kidney Diseases and Treatment: A Review
Ms. Astha Ameta Ms. Kalpana Jain
A Unified Bigdata Analysis Platform Using Hadoop Technology
Suma M.R, Sanjana P Bharath Kumar S
Big data is prevalent in both industry and scientific research applications, where data generated with high volume and velocity is difficult to process using on-hand database management tools or traditional data processing applications. Some techniques, such as cloud analytics, have been developed in recent years for processing large object data on the cloud. However, these techniques do not provide efficient support for parallel processing and cluster technology. Big data platforms often need to support emerging data sources and applications while accommodating existing ones. Since different data and applications have varying requirements, multiple types of data stores (e.g. file-based and object-based) frequently co-exist in the same solution today without proper integration. Hence cross-store data access, key to effective data analytics, cannot be achieved without laborious application re-programming, prohibitively expensive data migration, and/or costly maintenance of multiple data copies. We address this vital issue by introducing a first unified big data platform over heterogeneous storage. In particular, we present a prototype joining Apache Hadoop MapReduce and Flume technology. A retail data analysis application using data from a real Twitter application is employed to test and showcase our prototype. We have found that our prototype achieves 50% data capacity savings, eliminates data migration overhead, and offers stronger reliability and enterprise support. Through our case study, we have learned important theoretical lessons concerning performance and reliability, as well as practical ones related to platform configuration. We have also identified several potentially high-impact research directions.
An Approach for Composing Web Services through OWL
Kumudavalli. N
The Semantic Web promises to bring automation to the areas of web service discovery, composition and invocation. It purports to take the Web to unexplored efficiencies and provide a flexible approach for promoting all types of activities in tomorrow’s Web. In this paper, we propose an ontology-based framework for the composition of Web services. The model is based on an iterative and incremental scheme meant to better capture requirements in accordance with service consumers’ needs. OWL-S markup vocabularies and the associated inference mechanism are used and extended as a means of bringing semantics to service requests. This framework is used for exploring interesting compositions of existing Web services. In this approach we look for similarities between Web services; this method is followed when we are unaware of a specific goal for the services. The framework first screens web services for composition leads based on their service operations.
Performance Analysis Of Data Mining Algorithms For Breast Cancer Cell Detection Using Naïve Bayes, Logistic Regression and Decision Tree
Subrata Kumar Mandal
Breast cancer is the second leading cause of cancer death in women. Despite the fact that cancer is preventable and curable in its primary stages, a vast number of patients are diagnosed very late. Established methods of detecting and diagnosing cancer mainly depend on skilled physicians, with the help of medical imaging, to detect symptoms that usually appear in the later stages. The objective of this paper is to find the smallest subset of features that can guarantee highly accurate classification of breast cancer as either benign or malignant. A comparative study of different cancer classification approaches, viz. Naïve Bayes (NB), Logistic Regression (LR) and Decision Tree (DT) classifiers, is then conducted, in which the time complexity of each classifier is also measured. The Logistic Regression classifier is concluded to be the best classifier, with the highest accuracy of the three.
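Of the three classifiers compared, Naïve Bayes is the simplest to show from first principles. The sketch below is a from-scratch Gaussian Naïve Bayes on toy two-feature data, purely to illustrate the mechanics; it is not the paper’s implementation, and the paper’s feature selection and timing experiments are not reproduced:

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate per-class feature means/variances and class priors."""
    stats = {}
    for cls in set(y):
        rows = [x for x, label in zip(X, y) if label == cls]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]   # +1e-9 avoids /0
        stats[cls] = (len(rows) / len(y), means, vars_)
    return stats

def predict(stats, x):
    """Return the class with the highest log posterior under
    the conditional-independence (naive) assumption."""
    def log_post(cls):
        prior, means, vars_ = stats[cls]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(stats, key=log_post)
```

Logistic Regression and Decision Trees would typically be compared against this using a library on the same feature subset, scoring accuracy and training time per classifier.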
A Novel Method for Determining Link Correlation and Candidate Key in VANETs
A. Emmanuel Peo Mariadas Dr. R. Madhanmohan
Selection of the optimal relaying node within a street and of the best next street at an intersection are the dominant challenges in Vehicular Ad hoc NETworks (VANETs), owing to high vehicle mobility and the uneven distribution of vehicles. In this paper, the probability of link availability is estimated by a normalized model that considers both stable and unstable vehicle states. In existing work, periodic route-discovery message broadcasting increases the delay in path establishment. A new concept, link correlation, is introduced to represent the influence of different link combinations in the network topology. The link correlation concept is applied to transmit a packet with less network resource consumption and higher throughput. Based on link correlation, an opportunistic routing metric called the Expected Transmission Cost around a multi-hop Path (ETCoP) is designed as the guidance for selecting a relaying node within a street. This metric is further used for next-street selection at an intersection. Finally, a street-centric opportunistic routing protocol based on ETCoP (SRPE) is proposed for VANETs. A forwarding candidate selection technique based on the velocity vector is contributed to improve performance.
Genetic Approach on Descriptive Modeling of Data Mining
Divya Uppal Sakhuja, Dr.V.K.Pathak
Fast-developing computer science and engineering techniques have made information easy to capture, process and store in databases. Information, or data, consists of elementary factual variables. Knowledge is a set of instructions that describes how these facts can be interpreted and used [1]. Data describes the actual state of the world, while knowledge describes the structure of the world and consists of principles and laws. How to gather, store and retrieve data is the concern of databases. Soft computing (SC) is an evolving collection of methodologies that aims to exploit tolerance for imprecision, uncertainty and partial truth to achieve robustness, tractability and low cost. SC provides an attractive opportunity to represent the ambiguity in human thinking together with real-life uncertainty. Soft computing has recently been playing an important role in advanced knowledge processing. An advanced learning method using a combination of perception and motion has been introduced. Emergent, self-organizing, reflective and interactive (among human beings, the environment and artificial intelligence) knowledge processing is achieved by using soft computing and by borrowing ideas from bio-information processing [2].