The main purpose of this research is to study and understand the concepts of semantic web mining and web crawlers. Every day, tens to hundreds of millions of web pages are generated, and search engines answer tens of millions of queries against them. Critical to a search engine are the performance and quality of its results and its ability to crawl and index the web efficiently. The primary goal is to provide high-quality search results over a rapidly growing World Wide Web; for this, semantic web mining and downloading information via web crawling are needed. This work examines how information is extracted from the web using crawlers and surveys the research areas of semantic web mining.
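To make the crawling idea concrete, the following is a minimal sketch of a breadth-first crawler, not the crawler studied in this work. The `fetch` callable and the tiny in-memory "site" are hypothetical stand-ins, so the same loop could be pointed at live pages with a real HTTP fetcher.

```python
from collections import deque
import re

def crawl(seed_url, fetch, max_pages=10):
    """Breadth-first crawl starting from seed_url.

    `fetch` is any callable returning the HTML of a URL, so the crawler
    can be exercised against an in-memory site as easily as the live web.
    Returns the list of URLs visited, in crawl order."""
    frontier = deque([seed_url])   # URLs waiting to be fetched
    visited = []                   # URLs already fetched, in order
    seen = {seed_url}              # guards against re-queueing duplicates
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        try:
            html = fetch(url)
        except Exception:
            continue               # skip unreachable pages
        visited.append(url)
        # extract outgoing links from href attributes
        for link in re.findall(r'href="([^"]+)"', html):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return visited

# tiny in-memory "web" used to demonstrate the crawler
site = {
    "a": '<a href="b">B</a> <a href="c">C</a>',
    "b": '<a href="c">C</a>',
    "c": '<a href="a">A</a>',
}
order = crawl("a", site.__getitem__)
print(order)  # breadth-first order: ['a', 'b', 'c']
```

A real crawler would add politeness delays, robots.txt handling, and URL normalization on top of this loop.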
A Data Mining Technique for Prediction of Coronary Heart Disease Using a Two-Level Neuro-Fuzzy Integrated Approach
Ashish Kumar Sen, Shamsher Bahadur Patel, Dr. D. P. Shukla
Cardiovascular disease remains the biggest cause of death worldwide, and predicting heart disease at an early stage is important. A large amount of data is generated in medical organizations (hospitals, medical centers), but this data is not properly used; a wealth of hidden information is present in these datasets, and this unused data can be converted into useful knowledge. For this purpose we can use different data mining techniques. In this paper, we define a two-layered approach for identifying the possibility of disease. The critical factors that are mandatory for the occurrence of coronary heart disease are taken at the first level, and the remaining ones are taken at the second level. This two-level approach increases the performance of our work, as it helps in predicting disease chances accurately. The heart disease dataset is taken from the UCI machine learning repository to train a neural network, and fuzzy rules are then applied to predict the chance of coronary heart disease as low, medium, or critical.
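As an illustration only, the two-level idea can be sketched as follows. The weights, thresholds, and factor normalization below are invented for the example; they are not the rules trained in the paper.

```python
def fuzzy_risk_level(critical_factors, secondary_factors):
    """Two-level rule sketch: critical factors (e.g. chest pain,
    abnormal ECG) are weighted at level one, the remaining factors
    (e.g. age, cholesterol) at level two, and simple fuzzy-style
    rules map the combined score to a linguistic risk label.
    All factor values are assumed normalized to [0, 1]."""
    level1 = sum(critical_factors) / len(critical_factors)
    level2 = sum(secondary_factors) / len(secondary_factors)
    score = 0.7 * level1 + 0.3 * level2   # level one dominates
    if score < 0.35:
        return "low"
    elif score < 0.65:
        return "medium"
    return "critical"

print(fuzzy_risk_level([0.9, 0.8], [0.7, 0.6]))  # critical
print(fuzzy_risk_level([0.1, 0.2], [0.3, 0.2]))  # low
```

In the paper's pipeline, the scores feeding such rules would come from a trained neural network rather than a plain average.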
Development of Inventory Database System Using Radio Frequency Identification
N. Ishak, N. A. Ali, A. S. Ja’afar, S. H. Husin, N. M. Z. Hashim
The inventory database system using radio-frequency identification (RFID) is a database management system with an RFID tag as a triggering input and as security for the system. The idea is to develop a systematic database through which lecturers can request stationery from the faculty office, enhancing it with RFID and establishing a Local Area Network (LAN) connection. The aim of this project is to give users easy access for requesting stationery from the faculty office. The RFID tag, which has a unique identification number, serves as the input/password for the user/administrator to access the system. Microsoft SQL Server 2005 is used as the database that stores all information and inventory related to this system. The interface is developed using Microsoft Visual Basic 6.0, which also serves as the integration layer between the hardware and software. As a result, the hardware-to-software configuration can be completed, enabling the user and administrator to begin using the system. As future work, an e-mail alert reminder can be added as an additional notification of the approval status of a requested item in order to enhance the system.
Achieving Anonymity and Traceability in Wireless Networks
R. Manasa, Annapurna, G. Sowmya
Anonymity provides protection for users to enjoy network services without being traced. While anonymity-related issues have been extensively studied in payment-based systems such as e-cash and peer-to-peer systems, little effort has been devoted to wireless mesh networks (WMNs). On the other hand, the network authority requires conditional anonymity such that misbehaving entities in the network remain traceable. In this paper, we propose a security architecture to ensure unconditional anonymity for honest users and traceability of misbehaving users for network authorities in WMNs. The proposed architecture strives to resolve the conflicts between the anonymity and traceability objectives in addition to guaranteeing fundamental security requirements.
Contrast Enhancement Image Fusion Using a Gaussian Filter
Mandeep Kaur, Navpreet Singh
Contrast enhancement has an important role in image processing applications. Conventional contrast enhancement techniques either fail to produce satisfactory results for a broad variety of low-contrast images or cannot be applied automatically to different images, because their parameters must be specified manually to produce a satisfactory result for a given image. This paper proposes a new automatic method for contrast enhancement with a Gaussian filter. The basic procedure is to first group the histogram components of a low-contrast image into a proper number of bins according to a selected criterion, then redistribute these bins uniformly over the grayscale, and finally ungroup the previously grouped gray levels. The proposed approach is abbreviated as DACE/LIF and involves two main stages: conventional histogram equalization (CHE) and linear image fusion (LIF). Although CHE has the problem of over-enhancement, details that are not obvious in the original image are generally revealed after CHE. Interestingly, the details shown in the original image and in the equalized image exhibit a kind of complementary property, called the detail complementary property (DCP). The DCP suggests that the details in the original image and the equalized image may be combined to form an image with better contrast and visual quality. In light of the DCP, LIF is employed to combine the original image with its CHE-equalized version. Simulation results indicate that the image enhanced by the proposed DACE/LIF with Gaussian approach has better visual quality than the original image.
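A minimal sketch of the two stages on a flat list of gray levels may help; the paper operates on full 2-D images with histogram grouping, and the fusion weight `alpha` below is an assumed value, not the one derived in the method.

```python
def histogram_equalize(pixels, levels=256):
    """Conventional histogram equalization (CHE): remap each gray
    level through the normalized cumulative histogram so the output
    spreads over the full grayscale."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                    # flat image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

def fuse(original, equalized, alpha=0.5):
    """Linear image fusion (LIF): blend the original with its CHE
    result so complementary details from both are retained."""
    return [round(alpha * o + (1 - alpha) * e)
            for o, e in zip(original, equalized)]

low_contrast = [100, 102, 104, 106, 108, 110]   # narrow gray range
eq = histogram_equalize(low_contrast)
print(eq)                    # spread over the full 0..255 range
print(fuse(low_contrast, eq))
```

The fused output sits between the flat original and the over-enhanced equalized version, which is the intuition behind the DCP.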
Energy-Efficient Strategies for Cooperative Multichannel MAC Protocols
P. V. Naga Lakshmi, B. Sasmitha, C. Rama Krishna
Distributed Information SHaring (DISH) is a new cooperative approach to designing multichannel MAC protocols. It aids nodes in their decision making processes by compensating for their missing information via information sharing through neighboring nodes. This approach was recently shown to significantly boost the throughput of multichannel MAC protocols. However, a critical issue for ad hoc communication devices, viz. energy efficiency, has yet to be addressed. In this paper, we address this issue by developing simple solutions that reduce the energy consumption without compromising the throughput performance and meanwhile maximize cost efficiency. We propose two energy-efficient strategies: in-situ energy conscious DISH, which uses existing nodes only, and altruistic DISH, which requires additional nodes called altruists. We compare five protocols with respect to these strategies and identify altruistic DISH to be the right choice in general: it 1) conserves 40-80 percent of energy, 2) maintains the throughput advantage, and 3) more than doubles the cost efficiency compared to protocols without this strategy. On the other hand, our study also shows that in-situ energy conscious DISH is suitable only in certain limited scenarios.
Faulty Node Detection in Multirate Anypath Routing Protocol in Multi-Hop Wireless Networks
S. B. Manooj Kumar, Dr. A. Kathirvel
Wireless sensor networks consist of many sensor nodes that sense their surroundings and forward information to their nearest neighbor nodes. For the sensor nodes to cooperate and coordinate their operations, connectivity must be preserved at all times. The length of an intermediate communication path constrains node detection, communication efficiency, and latency, so a faulty node must be detectable across a variety of network topologies. Modern recovery schemes assume that each node communicates only with the nodes within its communication range and reposition a subset of the actor nodes in parallel to restore connectivity. Prior work has presented multirate anypath routing and proved its optimality, but its route discovery does not address communication with failed nodes. We propose a Distributed Cut Detection algorithm, a distributed iterative method for detecting faulty nodes in multirate anypath routing for multi-hop wireless networks with reliable communication. Nodes detect when connectivity to a specially designated node has been lost, and one or more nodes detect the occurrence of a partition, independent of the size and structure of the network.
Towards Automated Design of Combinational Circuits Using Evolutionary Techniques
K. Sagar, Dr. S. Vathsal
We introduce a technique based on evolutionary algorithms to automate and optimize the design of combinational circuits. Logic circuits are at the core of modern computing, so the process of designing efficient circuits is of critical importance. By exploring the full range of possible solutions, circuits can be discovered that are superior to the best known human designs; automated design techniques borrowed from artificial intelligence have allowed exactly that. Specifically, the application of genetic algorithms has produced circuits that are substantially superior to the best known human designs. Systematic search is perhaps best exemplified by its simplest and most intuitive manifestation. This proposal expands on such previous research with a three-fold approach comprising distinct optimizations for the application of genetic algorithms to design, the formulation and implementation of a systematic search technique for the problem, and a comparison of the relative merits of the optimized genetic algorithm and the systematic search technique. The results are also compared with the Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) algorithms.
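The genetic-algorithm loop can be sketched with a deliberately tiny toy: here each chromosome is a candidate truth table rather than a gate-level circuit, and the population size, operators, and target function are all invented for the example rather than taken from the paper.

```python
import random

def evolve(target, pop_size=20, generations=300, seed=1):
    """Toy genetic algorithm: each chromosome is a candidate truth
    table (one output bit per input row) and fitness counts the rows
    matching the target function. A real circuit synthesizer would
    encode gates and wiring, but the select/crossover/mutate loop is
    the same."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda c: sum(a == b for a, b in zip(c, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:           # perfect match found
            break
        parents = pop[:pop_size // 2]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1   # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = [0, 1, 1, 0]                      # 2-input XOR truth table
best = evolve(target)
print(best)
```

Because the random generator is seeded, the run is reproducible; a gate-level encoding would swap the bit flip for mutations over gate types and wiring.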
Iris Segmentation and Recognition Using Log-Gabor Filter and Curvelet Transform
K. Sathiyaraja, M. Dhineshkumar, N. Thiyagarajan
Biometric methods have played important roles in personal recognition during the last twenty years; these methods include face recognition, fingerprinting, and iris recognition. Recently, iris imaging has found many applications in security systems. The aim of this paper is to design and implement a new iris recognition algorithm, providing new feature extraction methods based on log-Gabor filters and the curvelet transform for identifying iris images. The iris is the annular region between the sclera and the pupil of the human eye. This region contains extraordinary texture with many prominent features, on which recognition mainly relies. An existing approach adopted the Scale Invariant Feature Transform (SIFT) to extract local feature points in both Cartesian and polar coordinate systems; but since many local patterns of the iris are likely to be similar, the recognition accuracy of a SIFT-based system is not as good as that of the traditional methods. We present a novel fuzzy matching strategy with invariant properties, which provides a robust and effective matching scheme for two sets of iris feature points, and adopt a nonlinear normalization model to provide more accurate positions before matching. An effective iris segmentation method is proposed to refine the detected inner and outer boundaries into smooth curves. For feature extraction, instead of log-Gabor filters we propose the curvelet transform to detect local feature points in the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. The proposed matching algorithm, based on the PFM method, compares two sets of feature points using information comprising the local features and the position of each point.
An Efficient Twisted Edwards-Form Elliptic Curve for Fast Secured Message Passing Interface
P. Ramyadevi, Dr. P. Krishnakumari
Information processed in a distributed cluster is shared among distributed tasks or users by virtue of message passing protocols (e.g., the Message Passing Interface, MPI), or confidential data is transmitted to and from cluster computing nodes. In a public network, as the number of interconnected clusters increases, so does the potential threat to security applications running on them. To deal with this, a Message Passing Interface (MPI) is developed that preserves security services in an unsecured network. This work focuses on MPI rather than other protocols because MPI is one of the most popular communication protocols on distributed clusters. Generally, the AES algorithm is used for encryption/decryption and an interpolation polynomial algorithm for key management; in this research work, however, Twisted Edwards-form elliptic curve cryptography is used. It is integrated with the standard MPI interface of Message Passing Interface Chameleon version 2 (MPICH2), yielding ESMPICH2. This approach provides better security with less overhead and is faster than the existing techniques. Keywords: Twisted Edwards-Form Elliptic Curve Cryptography, cluster, Message Passing Interface, High Performance.
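A sketch of the twisted Edwards group law, the arithmetic underlying such a scheme, is shown below over a toy prime field. The parameters are illustrative only and far too small to be secure; a system like ESMPICH2 would use cryptographic-size parameters.

```python
# toy twisted Edwards curve  a*x^2 + y^2 = 1 + d*x^2*y^2  (mod p)
# NOTE: illustrative parameters, not cryptographically secure
p, a, d = 1009, 1, 998

def on_curve(P):
    x, y = P
    return (a * x * x + y * y - 1 - d * x * x * y * y) % p == 0

def add(P, Q):
    """Unified twisted Edwards addition: the same formula doubles a
    point and adds distinct points, one reason Edwards forms are fast
    and side-channel friendly."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow((1 + t) % p, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow((1 - t) % p, -1, p) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication, the core operation of
    ECC key exchange and signatures."""
    R = (0, 1)                 # identity element of the curve group
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

B = (1, 0)                     # a point on this toy curve (order 4)
print(scalar_mult(4, B))       # (0, 1), the identity
```

The `pow(x, -1, p)` modular inverse requires Python 3.8+; production code would also use complete, constant-time formulas.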
Recommender systems have become popular and are used in many fields for gathering information based on user requirements; they mainly help the user access relevant information. Many recommendation frameworks based on different algorithms revolve around the concept of accuracy only, while other important features, such as the diversity of the recommendations, go unnoticed. In this paper, an efficient optimization technique along with a novel ranking technique is proposed to provide more diverse recommendations while satisfying the required recommendation features. The proposed algorithm is compared with the existing item-based ranking technique and simulated on several real-world data sets.
Load Balancing Parallel Routing Protocol in Mobile Ad Hoc Network
M. Selladevi, P. Krishnakumari
The combination of link-quality variation with the broadcast nature of wireless channels has revealed a new direction in wireless networking research, namely cooperative communication. Existing work tackles the problem of opportunistic data transfer in mobile ad hoc networks with a solution called Cooperative Opportunistic Routing in Mobile Ad hoc Networks (CORMAN). It is a pure network-layer scheme that can be deployed atop off-the-shelf wireless networking equipment. Nodes in the network use a lightweight proactive source routing protocol to determine the list of intermediate nodes that data packets should follow to the destination. When a data packet broadcast by an upstream node happens to be received by a node farther downstream on the route, it continues from there and thus reaches the destination sooner. This is achieved through cooperative communication at the link and network layers. Since pipelined data transportation promotes better spatial channel reuse, we enhance the system with a Load Balancing Parallel Routing Protocol (LBPRP), which distributes traffic among multiple paths, sending data in parallel over all paths at the same time. Evaluation results show that load balancing decreases the end-to-end delay and increases the packet delivery ratio and throughput, so the performance of CORMAN is improved accordingly.
A DISCRETE CHAOTIC ENCRYPTION ALGORITHM USING LORENZ ATTRACTOR
Rahul Ramakrishna, Rajeswari Seshadr
Communication of critical information over a global computer network requires encrypting the digital information before transmitting it through the network. In this research article, a novel Discrete Chaotic Data Encryption/Decryption Algorithm using the Lorenz attractor is proposed. The encryption system consists of a chaos generator, which takes the input message as plain text and produces an independent output message known as cipher text. In the present approach, the data to be transmitted is digitally modified by the chaotic system to produce the cipher text, which is sent through the transmission medium, and the reverse process occurs at the receiving end to decrypt the data. The essence of the procedure is to exploit the complex dynamics, yet simple mathematical descriptions and algorithms, of chaotic systems for encryption. The combination of these techniques results in a system that is extremely simple to implement yet cryptographically very robust, and highly resistant to differential and statistical attacks. The algorithm is independent of the size of the data to be encrypted and has been verified for several sets of input data consisting of all allowable characters. A preliminary test of the strength of the key has also been made, and the results are very encouraging. Several analyses have been carried out to validate the strength, randomness, and entropy of the newly proposed algorithm.
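The chaos-as-keystream idea can be sketched as follows. This is a toy illustration, not the paper's algorithm and not a vetted cipher: the Euler step size, burn-in length, and byte quantization are all assumed values.

```python
def lorenz_keystream(n, key=(1.0, 1.0, 1.0), dt=0.01, burn_in=100):
    """Generate n keystream bytes by iterating the Lorenz system
    x' = s(y - x), y' = x(r - z) - y, z' = xy - bz with Euler steps.
    The initial state (x, y, z) acts as the secret key; the first
    burn_in iterations are discarded as a transient."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x, y, z = key
    out = []
    for i in range(burn_in + n):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            # quantize the chaotic state into one keystream byte
            out.append(int(abs(x + y + z) * 1e10) % 256)
    return bytes(out)

def crypt(data, key=(1.0, 1.0, 1.0)):
    """XOR with the chaotic keystream; applying it twice decrypts."""
    ks = lorenz_keystream(len(data), key)
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"attack at dawn"
ct = crypt(msg)
print(crypt(ct) == msg)   # True: the same key recovers the plain text
```

Sensitivity to initial conditions is what gives the key its strength here: even a slightly perturbed key yields a completely different keystream after the burn-in.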
Multi-criteria Recommender Systems for Open Authorization
A. Ravali, G. Sudhakar
Online social networks such as Twitter, Flickr, and Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. These online platforms allow third-party applications, such as games and productivity applications, access to users' private online data; such access must be authorized by users at installation time. The Open Authorization protocol (OAuth) was introduced as a secure and efficient method for authorizing third-party applications without releasing a user's access credentials, but it fails to provide fine-grained access control. We propose an extension to OAuth 2.0 authorization that provisions fine-grained authorization recommendations when granting permissions to third-party applications using a multi-criteria recommender system. The recommender system utilizes application-based, user-based, and category-based collaborative filtering mechanisms. Our collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences of other users. We implemented our proposed OAuth extension as a browser extension that allows users to easily configure their privacy settings at application installation time, provides recommendations on requested privacy permissions, and collects data regarding user preferences.
Conceivable applications include amended mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing, 3D television, and cloud computing. A 4G system provides mobile ultra-broadband Internet access, for example to laptops with USB wireless modems, smartphones, and other mobile devices. Network systems rely on the key technologies of information gathering, processing, and distribution, and should be controlled by a server. To enhance the benefits of these networks, we introduce 4G systems. This paper proposes TD-LTE technology in 4G systems towards enhancing the security of 2G and 3G systems. Two commercially deployed 4G candidate systems, the Mobile WiMAX standard and the first-release Long Term Evolution (LTE) standard, are discussed together with TD-LTE technology.
Risk-Aware and ALERT Protocol for Mitigating Routing Attacks in Mobile Ad Hoc Networks
D. Francis Xavier Christopher, R. Nithya
Mobile Ad Hoc Networks (MANETs) are highly vulnerable to attacks due to the dynamic nature of their network infrastructure. Among these attacks, routing attacks receive particular attention because they change the topology itself and cause serious damage to the MANET. Existing algorithms do not provide anonymity protection while finding malicious nodes with a degree of evidence from expert knowledge and detecting the important factors for each node. In the proposed method, the ALERT protocol is developed to overcome these problems. ALERT primarily provides high anonymity protection at low cost. Using the proposed protocol, the network field is dynamically partitioned into zones, and nodes in the zones are randomly chosen as intermediate relay nodes, forming a non-traceable anonymous route. In particular, at every routing step, a data sender or forwarder partitions the network field so as to separate itself and the destination into two zones. In the last step, the data is broadcast to k nodes in the destination zone, providing k-anonymity to the destination. ALERT is also resilient to timing attacks and intersection attacks. In addition, the experiments demonstrate the effectiveness of the proposed approach under several performance metrics.
Network Coding as an Efficient Technique of Transmitting Data
Khundrakpam Johnson Singh, Usham Sanjota Chanu
Network coding is a technique which allows each node in a network to perform some kind of computation: the data sent on a node's output link can be some function or combination of data that arrived earlier on its input links. In other words, network coding is the transmission, mixing, and remixing of messages arriving at nodes inside the network, such that the transmitted data can be unmixed at the final destinations, or sinks. In this paper we take multiple sources, each capable of transmitting data to the next intermediate node. The mixing operation is carried out at the intermediate node, or router. The paper also reduces the frequency of mixing data from different source nodes by maintaining a fixed random function. Finally, the sink decodes the mixed data from the different sources and reassembles it to form the original data.
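The mix-at-the-router idea can be shown with the classic XOR trick on a butterfly-style topology; this is a generic illustration of network coding, not the specific scheme of the paper.

```python
def xor_bytes(a, b):
    """Byte-wise XOR: the simplest mixing function used in network coding."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two sources each hold a packet destined for both sinks.
p1 = b"HELLO"
p2 = b"WORLD"

# Instead of relaying p1 and p2 one after another, the bottleneck
# router forwards their XOR in a single transmission.
coded = xor_bytes(p1, p2)

# Sink 1 hears p1 directly plus the coded packet, and recovers p2;
# sink 2 hears p2 directly plus the coded packet, and recovers p1.
print(xor_bytes(coded, p1))   # b'WORLD'
print(xor_bytes(coded, p2))   # b'HELLO'
```

The saving is one transmission on the shared link per packet pair, which is where the throughput gain of coding comes from.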
OBLIGING EVINCIBLE DATA CHATTELS FOR VERIFICATION IN DISTRIBUTED CLOUD APPLICATION
B. Nagalakshmi, Mr. Ramakrishna
Obliging Evincible Data Chattels is a technique for ensuring the integrity of data in storage outsourcing. In this paper, we address the construction of an efficient Provable Data Possession scheme for distributed cloud storage that supports scalability of service and data migration, in which multiple cloud service providers cooperatively store and maintain the client's data. We present a cooperative Obliging Evincible Data Chattels scheme based on homomorphic verifiable responses and a hash index hierarchy, and we prove the security of our scheme based on a multi-prover zero-knowledge proof system. We show that the scheme achieves scalability with modest communication overhead in comparison with non-cooperative approaches. Index Terms: multiple, cloud, services, data scalability, chattels.
Graphical Password or Graphical User Authentication as Effective Password Provider
Khundrakpam Johnson Singh, Usham Sanjota Chanu
A graphical password is an authentication system that works by having the user select from images, in a specific order, presented in a graphical user interface (GUI); for this reason, the graphical-password approach is sometimes called graphical user authentication (GUA). The most common computer authentication method is to use alphanumerical usernames and passwords, but this method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed; on the other hand, if a password is hard to guess, then it is often hard to remember. Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. Graphical password schemes have been proposed as a possible alternative to text-based schemes, motivated partly by the fact that humans can remember pictures better than text; psychological studies support this assumption, as pictures are generally easier to remember or recognize than text. In addition, if the number of possible pictures is sufficiently large, the possible password space of a graphical password scheme may exceed that of text-based schemes and thus presumably offer better resistance to dictionary attacks. Because of these (presumed) advantages, there is growing interest in graphical passwords. In addition to workstation and web log-in applications, graphical passwords have also been applied to ATMs and mobile devices.
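The password-space comparison can be made concrete with a quick back-of-the-envelope calculation; the pool sizes and click counts below are assumed example values, not figures from any particular scheme.

```python
from math import log2, perm

def graphical_bits(pool, clicks):
    """Password-space entropy in bits when the user clicks `clicks`
    distinct images, in order, out of a pool of `pool` images:
    the space is the number of ordered selections perm(pool, clicks)."""
    return log2(perm(pool, clicks))

# 8-character password over a 64-symbol alphanumeric alphabet
text_bits = log2(64 ** 8)          # exactly 48 bits
print(text_bits)
print(graphical_bits(100, 5))      # small image pool: weaker than text
print(graphical_bits(1000, 5))     # large pool: exceeds the text scheme
```

This matches the claim above: ordering helps, but only once the picture pool is large enough does the graphical space overtake a typical text password space.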
Paradigm Shift for Project Managers in Agile Projects
Vakalanka Sai Phani Chandu, Bharani Manapragada
IT managers are always under pressure to meet deadlines and deliver timely results, either in the form of fully functional applications or improvements and changes to them. Despite budget cuts and economic downturns, IT companies must struggle to keep up with the pace of change and continue delivering. A possible solution to this conundrum is Agile software development methodologies, which help IT companies plan and execute their projects by meeting changing scenarios head on and assuring rapid delivery while remaining flexible and maintaining quality.
Prevention of Selective Jamming Attacks Using Swarm Intelligence Packet-Hiding Methods
R. Karpagam, P. Archana
Jamming is one of the most dangerous premeditated interference attacks in the wireless medium; jamming of wireless transmissions is used as a launch pad for mounting Denial-of-Service attacks on wireless networks. Traditionally, jamming has been addressed under an external threat model. We consider a sophisticated adversary who is aware of network secrets and the implementation details of network protocols at any layer of the network stack. The adversary uses this internal knowledge to launch jamming attacks in which specific messages of "high importance" are targeted. To mitigate such attacks, we develop three schemes that prevent real-time classification of transmitted packets. Our schemes rely on the joint consideration of cryptographic mechanisms with PHY-layer attributes. However, since these schemes alone do not fully prevent real-time packet classification, we also propose a swarm-based defense technique against jamming attacks in wireless sensor networks. The swarm intelligence algorithm is able to adapt to changes in topology and traffic. In the channel-hopping technique, the transmitter and receiver switch channels in order to stay away from the jammer.
Network coding is seen as a promising technique to improve network throughput. In this paper, we study two important problems in localized network coding in wireless networks, which requires each node to know about and coordinate with only its one-hop neighbors. In particular, we establish a condition that is both necessary and sufficient for useful coding to be possible. Content verification is an important and practical issue when network coding is employed: when random linear network coding is used, it is infeasible for the source of the content to sign all the data, and hence the traditional "hash-and-sign" methods are no longer applicable. However, such verification is difficult to apply to network coding because of its high computation and message overhead. We explore this issue further by carefully analyzing the different types of overhead, and propose methods that reduce both the computational and communication cost while providing provable security at the same time. We show that the condition for useful coding is much weaker than expected, and therefore permits changing coding schemes to suit different network conditions and application preferences. Based on this understanding, we design a robust coding technique called loop coding that can improve network throughput and TCP throughput simultaneously.
Software Product Development—an Approach Using Scrum
Vakalanka Sai Phani Chandu, Bharani Manapragada
Scrum is an incremental framework that helps in the systematic commencement and development of various types of software products in an iterative fashion; it is a derivative of Agile methodologies.
Harmonics Compensation of HAPF with Adaptive Fuzzy Dividing Rule
Mr. Polisetti Sudheer, Dr. S. Satyanaraya
This paper deals with a hybrid active power filter with injection circuit (IHAPF), which shows great promise in reducing harmonics and improving the power factor with a relatively low-capacity active power filter. The paper concludes that the stability of IHAPF based on detection of the supply current is superior to that of other approaches. To minimize the capacity of the IHAPF, an adaptive fuzzy dividing frequency-control method is proposed by analysing the Bode diagram; it consists of two control units: a generalized integrator control unit and a fuzzy adjustor unit. The generalized integrator is used for dividing frequency integral control, while fuzzy arithmetic is used for adjusting the proportional-integral coefficients in a timely manner. The control method is generally useful and applicable to other active filters. Compared to other IHAPF control methods, the adaptive fuzzy dividing frequency control shows the advantages of shorter response time and higher control precision. It is implemented in an IHAPF with a 100-kVA APF installed in a copper mill in Northern China. The simulation results show that the method is not only easy to implement but also very effective in reducing harmonics.
ANALYSIS OF VULNERABILITY IN INTERNET FIREWALL USING RULE BASED ALGORITHM
T. Maheswari, Mr. A. Senthil Kumar
Collaborative information systems (CISs) are deployed within a diverse array of environments that manage sensitive information. Current security mechanisms detect insider threats, but they are ill-suited to monitor systems in which users function in dynamic teams. The community anomaly detection system (CADS) is an unsupervised learning framework to detect insider threats based on the access logs of collaborative environments. The framework is based on the observation that typical CIS users tend to form community structures based on the subjects accessed. CADS consists of two components: 1) relational pattern extraction, which derives community structures, and 2) anomaly prediction, which leverages a statistical model to determine when users have sufficiently deviated from their communities. We further extend CADS into MetaCADS to account for the semantics of subjects. Network security applications generally require the ability to perform powerful pattern matching to protect against attacks such as viruses and spam. Traditional hardware solutions are intended for firewall routers; however, the solutions in the literature for firewalls are not scalable, and they do not address the difficulty of an antivirus with an ever-larger pattern set. Related works have focused on algorithms and have even developed specialized circuits to increase scanning speed.
The least significant bit (LSB) based approach is a popular type of steganographic algorithm in the spatial domain. The advantage of LSB is its simplicity: the bits of the message are embedded directly into the LSB plane of the cover image, and many applications use this method. In this paper we give an overview of different LSB methods and their advancements.
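The basic LSB embed/extract cycle can be sketched in a few lines; this toy operates on a flat byte sequence standing in for pixel data, and a real system would also embed the message length or a terminator rather than passing it out of band.

```python
def embed(cover, message):
    """Hide message bits in the least significant bit of each cover
    byte (pixel), MSB-first within each message byte. The cover must
    provide at least 8 bytes per message byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(cover) >= len(bits), "cover too small for message"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract(stego, length):
    """Read back `length` message bytes from the LSB plane."""
    out = []
    for j in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[j * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

cover = bytes(range(40))            # stand-in for 40 pixel values
stego = embed(cover, b"hide")
print(extract(stego, 4))            # b'hide'
```

Because only the LSB plane changes, each cover byte moves by at most one intensity level, which is why LSB embedding is visually imperceptible.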
Efficient Master /Slave Computing Using Threshold Value Specification
D. Deeba, A. Senthilkumar
Master/slave computing has become a widely studied model in recent years. Its potential benefits are higher availability (multiple nodes to read from) and quicker response times (when reads are served by slaves). The proposed study of master/slave computing examines a Parallel Discrete Event Simulation (PDES) approach, in which nodes can easily be added or removed during execution. In the traditional system, the master/slave approach compares two overheads, namely transmission time and blocking time. In the master/slave system, the master assigns modules to the workers (slaves), and each slave executes the modules in the order assigned. Successful outsourced computations are validated against a threshold value: when the found value equals the threshold value, the algorithm runs for the first request, and once it finishes, the response is executed. Finally, this paper presents overall performance results comparing the found value with the threshold-value specification to confirm the security of information in the system.
Detection of Flash Crowd Attacks Based on Router Values Using Alarm Fixation
P. Sakila, A. Senthilkumar
Flash crowds are unexpected, and attackers may simulate the traffic patterns of flash crowds to fly below the radar. In such a case, a similarity-based detection method treats network packets that share the same destination address as one network flow. The packets are clustered, the flows are analyzed, and the flow information is stored in a database. Packets travel from source to destination through routers, where their authentication can be checked. If anonymous packets attempt to enter a router, an alarm is raised and the router is instructed not to admit them. Flash crowd attacks frequently lead to distributed denial-of-service (DDoS) attacks. A DDoS attack flow can be distinguished from a flash crowd by the flow correlation coefficient at the edge router, provided the sampled flow is sufficiently long and the DDoS attack strength is sufficiently high. The algorithm examines router values using the similarity-based detection method over the flow information stored in the database. This research work covers the concepts of flash crowds and router values, and analyzes traffic on the basis of router values.
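The flow correlation test at the edge router can be sketched as follows: sample two flows as packet counts per time interval and compute their Pearson correlation coefficient; attack flows generated by the same tool tend to be strongly correlated, while genuine flash crowd flows are not. The 0.9 threshold and the function names are illustrative assumptions, not values from the paper.

```python
import math

def flow_correlation(x, y):
    """Pearson correlation coefficient between two sampled flows,
    each given as a list of packet counts per time interval."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def classify(x, y, threshold=0.9):
    """Flag a DDoS attack when two flows are suspiciously similar."""
    return "ddos" if flow_correlation(x, y) > threshold else "flash_crowd"
```

The test is only reliable when the sampled flows are long enough for the correlation estimate to stabilize, which matches the abstract's condition on flow length and attack strength.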
Prevention of Trespasser Access in Networks Using Ticket Granting Service
C. Jayalakshmi, A. Senthil Kumar
This paper addresses the problem of protecting a system from untrusted users who access shared policies. Web site administrators routinely rely on IP-address blocking to disable access for misbehaving users, but blocking IP addresses is impractical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and well-behaved users alike. To address this problem, we present Nymble, a system in which servers can "blacklist" misbehaving users, thereby blocking them without compromising their anonymity. The interactive counterparts of group signatures are identity escrow schemes, or group identification schemes with revocable anonymity. This work introduces a new provably secure group signature and a companion identity escrow scheme that are significantly more efficient than the state of the art. In its interactive, identity escrow form, our scheme is proven secure and coalition-resistant under the strong RSA and decisional Diffie-Hellman assumptions.
In CMOS circuits, the reduction of the threshold voltage due to voltage scaling leads to an increase in subthreshold leakage current and hence in static power dissipation. For the most recent CMOS feature sizes (e.g., 45 nm and 65 nm), leakage power dissipation has become an overriding concern for VLSI circuit designers; the International Technology Roadmap for Semiconductors (ITRS) reports that leakage power may come to dominate total power consumption [1]. In the nanometer technology regime, power dissipation and process parameter variations have emerged as major design considerations. These problems continue to grow, with leakage becoming a dominant form of power consumption that is projected by the ITRS to grow exponentially over the next decade. This directly affects portable battery-operated devices such as cellular phones and PDAs, which have long idle times. Several circuit-level and process-level techniques are used to minimize leakage current efficiently, reducing power loss and prolonging battery life in idle mode. A novel circuit-level approach, named the "zigzag keeper," is proposed for the reduction of power dissipation. The zigzag keeper combines the traditional zigzag approach with a keeper: a sleep transistor plus two additional transistors, driven by the already-computed output, retain the state of the circuit during sleep mode.
Power Efficient, High Performance SRAM array in 90nm CMOS Process
Ajay Kumar Singh, Mah Meng Seong
Memory arrays are an essential building block of any digital system. This paper presents the implementation of an SRAM array that avoids the half-selected-column disturbance when the cell has a separate write signal (data-aware 9T cell). Arrays of different sizes are simulated in terms of power, delay, and process variation, with and without peripheral circuits, and the results are compared with a conventional 6T cell array. The proposed array consumes less power than the 6T array during read/write and hold modes. The power reduction is due to the prevention of bit-line discharge during write operations, the control of leakage current through proper array implementation, and a lower voltage drop on the read bit-line. The write delay is improved by the separate write signal. The read delay is larger than in the 6T array because of the HD signal in the array; it can be reduced by independently optimizing the read path or by using a read/write multiplexer at the local bit line. During hold mode, a maximum power saving of 43% is achieved compared to the 6T array. The proposed array implementation also shows less variation with threshold voltage.
Advanced Technique for Removal of Salt & Pepper Noise in Images
Abhishek R, Srinivas N
When images are transmitted over channels, unwanted interference can introduce salt-and-pepper noise, also known as impulse noise. Filtering is used to remove this noise and recover a noise-free image with minimum signal distortion, leaving uncorrupted pixels intact. Among the best solutions for removing salt-and-pepper noise are nonlinear digital filters based on order statistics, such as the median filter. Median filters remove noisy and unwanted signals without damaging edges and corners. However, the standard median filter operates well at low noise densities but not at high densities, where the image becomes blurred and damaged, because conventional schemes apply the filter unconditionally across the whole image and alter the intensities of uncorrupted pixels as well. Distinguishing between corrupted and uncorrupted pixels prior to applying nonlinear filtering is therefore highly desirable. The adaptive median filter first identifies the noisy pixels and then replaces only those pixels, in place, using the median filter or one of its variants, while the remaining pixels are left unchanged; such filters are also known as "switching" or "decision-based" filters. The adaptive median filter works best at low noise levels, but at high noise levels it requires a large window size that does not fit the pixel neighborhood well. The existing methods, the Robust Estimation Algorithm (REA), the Adaptive Median Filter (AMF), and the Standard Median Filter (SMF), show good performance at low noise levels and poor performance at high noise levels. A new Weighted Median Filter (WMF), which performs well at high noise levels, is proposed.
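The standard and switching (decision-based) median filters described above can be sketched on a grayscale image stored as a list of rows. This is an illustrative sketch: border clamping and the 0/255 impulse test are chosen as assumptions, and it is not the proposed WMF.

```python
def median_filter(img, size=3):
    """Apply a size x size median filter; borders handled by clamping."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]  # the median of the window
    return out

def switching_median(img, size=3):
    """Decision-based variant: only pixels at the extremes (0 or 255),
    i.e. likely impulses, are replaced; clean pixels pass through."""
    filtered = median_filter(img, size)
    return [[filtered[y][x] if img[y][x] in (0, 255) else img[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]
```

The switching step is what preserves uncorrupted pixels: the unconditional filter smooths everything, while the decision-based one touches only suspected impulses.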
Sensitive Data Storage in Wireless Devices Using AES Algorithm
Anjali Patil, Rajeshwari Goudar
Nowadays, mobile devices such as cellular phones are widely used. This rapid growth in mobile technology enables the delivery of healthcare data on mobile phones. Healthcare data is very sensitive and must therefore be protected against unauthorized access. Mobile devices have two main constraints: memory and processing power. AES, a well-known standard block cipher, provides flexibility, simplicity, ease of implementation, and high throughput.
A Comparative Study of Cloud based ERP systems with Traditional ERP and Analysis of Cloud ERP implementation
C.M. Navaneethakrishnan
Cloud-based ERP system architecture provides solutions to many of the difficulties encountered by conventional ERP systems; it adds flexibility to existing ERP systems and improves overall efficiency. This paper aims to compare the performance of traditional ERP systems with cloud-based ERP architectures. The challenges facing conventional ERP implementations are analyzed, the main aspects of an ERP system are compared with the cloud-based approach, the distinct advantages of cloud ERP are explained, and the difficulties in the cloud architecture are also discussed.
In recent years we have experienced tremendous growth in Online Social Networks (OSNs), which have become a de facto portal for hundreds of millions of Internet users. These OSNs offer digital social interaction and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. In this paper, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Finally, we discuss a proof-of-concept prototype of our approach as an application in Facebook and provide a usability study and system evaluation of our method.
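The core of multiparty policy enforcement can be illustrated with two simple decision strategies over the stakeholders' individual policies: unanimous consent and majority voting. The function names and both strategies are simplified assumptions for illustration and do not reproduce the paper's full authorization model.

```python
def multiparty_decision(accessor, stakeholder_policies):
    """Unanimous strategy: grant access only if every stakeholder's
    policy (a set of permitted accessors) allows the accessor."""
    return all(accessor in allowed
               for allowed in stakeholder_policies.values())

def voting_decision(accessor, stakeholder_policies, threshold=0.5):
    """Voting strategy: grant if the fraction of stakeholders
    permitting the accessor exceeds the threshold."""
    votes = [accessor in allowed
             for allowed in stakeholder_policies.values()]
    return sum(votes) / len(votes) > threshold
```

For a photo co-owned by its uploader, a tagged user, and a contributor, each supplies one entry in `stakeholder_policies`, and the chosen strategy resolves their possibly conflicting wishes.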
Delay Tolerant Networks (DTNs) may be composed of a vast number of devices, such as smart phones, with diverse capacities in terms of energy resources and buffer space. We introduce a novel multi-copy routing protocol for DTNs, the Self-Adaptive Utility-based Routing Protocol (SAURP). SAURP can identify potential opportunities for forwarding messages to their destinations via a novel utility-function-based mechanism, in which a suite of environment parameters, such as wireless channel condition, nodal buffer occupancy, and encounter statistics, are jointly considered. Using a considerably small number of transmissions, it can reroute messages around nodes experiencing high buffer occupancy, wireless interference, and congestion. The SAURP utility function is proved to achieve optimal performance and is further analyzed via a stochastic modeling approach. To verify the developed analytical model, extensive simulations are conducted; the results show that SAURP outperforms a number of recently reported multi-copy encounter-based routing protocols in terms of delivery ratio, delivery delay, and the number of transmissions required per message delivered.
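A toy version of a utility-based forwarding decision: normalize the environment parameters the abstract lists (channel condition, free buffer, encounter statistics) into [0, 1], combine them with weights, and hand a message copy to an encountered node only when its utility clearly exceeds the current carrier's. The weights, the margin, and the function names are illustrative assumptions, not SAURP's actual utility function.

```python
def forwarding_utility(channel_quality, buffer_free, encounter_prob,
                       weights=(0.3, 0.3, 0.4)):
    """Combine normalized environment parameters (each in [0, 1])
    into a single forwarding utility via a weighted sum."""
    w1, w2, w3 = weights
    return w1 * channel_quality + w2 * buffer_free + w3 * encounter_prob

def should_forward(candidate, current, margin=0.1):
    """Hand over a copy only if the encountered node's utility exceeds
    the current carrier's by a margin, limiting needless transmissions."""
    return forwarding_utility(*candidate) > forwarding_utility(*current) + margin
```

The margin is what keeps the number of transmissions small: marginally better nodes do not trigger a handover, so copies flow only toward clearly better relays.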
A Survey of Various Methods to Improve the Throughput of Zigbee Cluster Tree Networks
Mayuri Singhal, Neeraj Mehta
IEEE 802.15.4 (Zigbee) is a leading technology for wireless sensor networks; it is a medium access control and physical layer standard specially designed for short-range wireless communication. Zigbee is used in applications that require low data rates, secure networking, low complexity, and low power consumption. Zigbee is typically used for environmental monitoring applications such as remote control and monitoring systems, healthcare devices, home automation, electrical meters with in-home displays, and traffic management systems. Routing protocols in Zigbee are classified on the basis of their route determination, e.g., ZRP and AODV, and are designed with the network's inherent capacity constraints in mind, i.e., power, memory, and bandwidth. In a Zigbee network, as the traffic load increases, the ratio of successful data delivery decreases, degrading performance and throughput. To enhance the performance and throughput of Zigbee, different researchers have proposed different algorithms and methods, which are summarized in this paper.