Pervasive Computing Issues, Challenges and Applications
Lathies, Bhasker T,
This paper describes the recent research topic of pervasive computing, focusing on its characteristics, architecture, issues and challenges. The pervasive architecture relates how the end-user interacts with the pervasive network using middleware support. Finally, it describes the future focus for pervasive computing through real-time applications.
The rapid growth of the communication field and ever-increasing data rates make relevant information retrieval more difficult in the modern world. Users typically want to access information from web search logs to obtain context-based data for their information needs. Many researchers have worked on content-based information retrieval in web search engines, but such models are unable to describe the relations between search terms. This paper presents an ontology-based pattern description technique for extracting semantic data from search logs through a query processing system. Experiments show that this model can improve the usage scenario for individual users.
A Study on Trade-offs among Service Excellence Attributes of Cloud Computing
P. Sundaramoorthy, S.P. Santhoshkumar, B. Santhoshkumar, Dr. M. Selvam,
Cloud computing offerings fall into infrastructure as a service (IaaS), development and runtime platforms as a service (PaaS), and software and business applications as a service (SaaS). Clients need not own any resources, and data are guaranteed to be available and ubiquitously accessible by means of Web services and Web APIs “in the cloud”. In the context of cloud computing, this implies a significant, principled business decision: whether to own and maintain a data center or outsource operations to the cloud. Challenges and opportunities are bound up with the availability and performance of software running in the cloud, as well as with privacy and data control. The client pays only for the IT resources that are actually needed; for the service provider, better utilization of existing infrastructure is achieved through a multi-tenant architecture. This paper examines trade-offs among service quality attributes, such as availability, distributed data consistency, service runtime performance, and privacy in cloud computing.
A Comprehensive Paper on Performance Evaluation of the DSDV and AODV Routing Protocols
Mr. Rakesh Kumar Khare, Mr. Raj Kumar Singh,
Ad-hoc networking is a concept in computer communications in which users who want to communicate with each other form a temporary network, without any form of centralized administration. Each node participating in the network acts both as a host and as a router and must therefore be willing to forward packets for other nodes. For this purpose, a routing protocol is needed. An ad-hoc network has certain characteristics that impose new demands on the routing protocol. The most important characteristic is the dynamic topology, which is a consequence of node mobility. Nodes can change position quite frequently, which means that we need a routing protocol that quickly adapts to topology changes. The nodes in an ad-hoc network can consist of laptops and personal digital assistants and are often very limited in resources such as CPU capacity, storage capacity, battery power and bandwidth. This means that the routing protocol should try to minimize control traffic, such as periodic update messages. Instead, the routing protocol should be reactive, calculating routes only upon receiving a specific request. In this paper we evaluate the DSDV and AODV routing protocols against metrics such as packet delivery ratio and end-to-end delay, as illustrated in the sketch below.
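As a concrete illustration of the metrics named above, the following minimal sketch computes packet delivery ratio (PDR) and mean end-to-end delay from send/receive timestamps. The trace format (maps from packet id to timestamp) is a hypothetical simplification, not the output format of any particular simulator.

```python
# Hedged sketch: evaluating PDR and mean end-to-end delay from
# hypothetical send/receive records keyed by packet id.

def evaluate(sent, received):
    """sent/received: dicts mapping packet_id -> timestamp (seconds)."""
    delivered = [pid for pid in sent if pid in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays = [received[pid] - sent[pid] for pid in delivered]
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    return pdr, avg_delay

sent = {1: 0.00, 2: 0.05, 3: 0.10}
received = {1: 0.12, 3: 0.31}          # packet 2 was dropped
pdr, delay = evaluate(sent, received)
print(f"PDR = {pdr:.2f}, mean end-to-end delay = {delay:.3f} s")
```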
Clarifying the Clouds: Discussing Cloud Computing and Service Grid Architectures
Er. Roma Soni, Er. Sahil Verma, Er. Kavita,
Cloud Computing is in vogue. But what is it? Is it just the same thing as outsourcing the hosting of Web applications? Why might it be useful and to whom? How does it change the future of enterprise architectures? How might clouds form the backbone of twenty-first-century ecosystems, virtual organizations and, for a particular example, healthcare systems that are truly open, scalable, heterogeneous and capable of supporting the players/providers both big and small? In the past, IT architectures took aim at the enterprise as their endpoint. Perhaps now we must radically raise the bar by implementing architectures capable of supporting entire ecosystems and, in so doing, enable these architectures to scale both downward to an enterprise architecture as well as upward and outward. We see cloud computing offerings today that are suitable to host enterprise architectures. But while these offerings provide clear benefit to corporations by providing capabilities complementary to what they have, the fact that they can help to elastically scale enterprise architectures should not be understood to also mean that simply scaling in this way will meet twenty-first-century computing requirements. The architecture requirements of large platforms like social networks are radically different from the requirements of a healthcare platform in which geographically and corporately distributed care providers, medical devices, patients, insurance providers, clinics, coders, and billing staff contribute information to patient charts according to care programs, quality of service and HIPAA constraints. And the requirements for both of these are very different from those that provision straight-through processing services common in the financial services industry. Clouds will have to accommodate differences in architecture requirements like those implied here, as well as those relating to characteristics we subsequently discuss. In this paper, we want to revisit autonomic computing, which defines a set of architectural characteristics to manage systems where complexity is increasing but must be managed without increasing costs or the size of the management team, where a system must be quickly adaptable to new technologies integrated to it, and where a system must be extensible from within a corporation out to the broader ecosystem and vice versa. The primary goal of autonomic computing is that “systems manage themselves according to an administrator’s goals. New components integrate effortlessly ...”. Autonomic computing per se may have been viewed negatively in past years, possibly due to its biological metaphor or the AI or magic-happens-here feel of most autonomic initiatives. But innovations in cloud computing in the areas of virtualization and finer-grained, container-based management interfaces, as well as those in hardware and software, are demonstrating that the goals of autonomic computing can be realized to a practical degree, and that they could be useful in developing cloud architectures capable of sustaining and supporting ecosystem-scaled use. Taking an autonomic approach permits us to identify core components of an autonomic computing architecture that Cloud Computing instantiations have thus far placed little emphasis on.
We identify technical characteristics below that must not be overlooked in future architectures, and we elaborate them more fully later in this paper:
- An architecture style (or styles) that should be used when implementing cloud-based services
- External user and access control management that enables roles and related responsibilities that serve as interface definitions that control access to and orchestrate across business functionality
- An Interaction Container that encapsulates the infrastructure services and policy management necessary to provision interactions
- An externalized policy management engine that ensures that interactions conform to regulatory, business partner, and infrastructure policy constraints
- Utility Computing capabilities necessary to manage and scale cloud-oriented platforms
Companies can greatly reduce IT costs by offloading data and computation to cloud computing services. Still, many companies are reluctant to do so, mostly due to outstanding security concerns. A recent study [2] surveyed more than 500 chief executives and IT managers in 17 countries, and found that despite the potential benefits, executives “trust existing internal systems over cloud-based systems due to fear about security threats and loss of control of data and systems”. One of the most serious concerns is the possibility of confidentiality violations. Either maliciously or accidentally, a cloud provider’s employees can tamper with or leak a company’s data. Such actions can severely damage the reputation or finances of a company. In order to prevent confidentiality violations, cloud services’ customers might resort to encryption. While encryption is effective in securing data before it is stored at the provider, it cannot be applied in services where data is to be computed, since the unencrypted data must reside in the memory of the host running the computation. In Infrastructure as a Service (IaaS) cloud services such as Amazon’s EC2, the provider hosts virtual machines (VMs) on behalf of its customers, who can run arbitrary computations. In these systems, anyone with privileged access to the host can read or manipulate a customer’s data. Consequently, customers cannot protect their VMs on their own. Cloud service providers are making a substantial effort to secure their systems, in order to minimize the threat of insider attacks and reinforce the confidence of customers. For example, they protect and restrict access to the hardware facilities, adopt stringent accountability and auditing procedures, and minimize the number of staff who have access to critical components of the infrastructure [8]. Nevertheless, insiders who administer the software systems at the provider backend ultimately still possess the technical means to access customers’ VMs. Thus, there is a clear need for a technical solution that guarantees the confidentiality and integrity of computation, in a way that is verifiable by the customers of the service.
A Grip on Cloud and Service Grid Technologies and Some Pain Points That Clouds and Service Grids Address
Er. Anup Lal Yadav, Er. Sahil Verma, Er. Kavita,
Cloud Computing frequently is taken to be a term that simply renames common technologies and techniques that we have come to know in IT. It may be interpreted to mean data center hosting and then subsequently dismissed without catching the improvements to hosting called utility computing that permit near real-time, policy-based control of computing resources. Or it may be interpreted to mean only data center hosting rather than understood to be the significant shift in Internet application architecture that it is. Cloud computing represents a different way to architect and remotely manage computing resources. One has only to establish an account with Microsoft or Amazon or Google to begin building and deploying application systems into a cloud. These systems can be, but certainly are not restricted to being, simplistic. They can be web applications that require only http services. They might require a relational database. They might require web service infrastructure and message queues. There might be a need to interoperate with CRM or e-commerce application services, necessitating construction of a custom technology stack to deploy into the cloud if these services are not already provided there. They might require the use of new types of persistent storage that might never have to be replicated because the new storage technologies build in the required reliability. They might require the remote hosting and use of custom or third-party software systems. And they might require the capability to programmatically increase or decrease computing resources as a function of business intelligence about resource demand using virtualization. While not all of these capabilities exist in today’s clouds, nor are all that do exist fully automated, a good portion of them can be provisioned.
Cloud Computing: Services and Deployment Models
Ch Chakradhara Rao, Mogasala Leelarani, Y Ramesh Kumar,
Cloud computing is associated with a new paradigm for the provision of computing infrastructure and services. It represents a shift away from computing as a product that is purchased, to computing as a service that is delivered to consumers over the Internet from large-scale data centers, or clouds. Clouds provide an infrastructure for easily usable, scalable, virtually accessible and adjustable IT resources that need not be owned by an entity but can be delivered as a service over the Internet. The cloud concept eliminates the need to install and run middleware and applications on users' own computers by providing Infrastructure, Platform and Services to users, thus easing the tasks of software and hardware maintenance and support. A cloud computing platform dynamically provisions, configures, reconfigures, and de-provisions servers as needed. Servers in the cloud can be physical machines or virtual machines. Cloud computing is changing the way we provision hardware and software for on-demand capacity fulfillment, and changing the way we develop web applications and make business decisions.
Bandwidth-Oriented Motion Estimation Algorithm for Real-Time Mobile Video Applications
M. Vijay Shankar, D. Jahnavi, V. Yokesh,
In resource-limited systems such as mobile video applications, the available memory bandwidth is limited and changes dynamically. This project proposes a data bandwidth-oriented motion estimation (ME) design for resource-limited mobile video applications. An integrated bandwidth rate-distortion optimization framework is used. This framework predicts and allocates the appropriate data bandwidth for motion estimation under a limited bandwidth supply. The residue number system is used in this framework in order to provide low power consumption and high scalability for data bandwidth-oriented ME.
The main objective of this paper is to introduce a new CPU scheduling algorithm, called the SJRR CPU Scheduling Algorithm, which acts preemptively based on arrival time. The algorithm helps to improve the average waiting time of the Round Robin algorithm in real-time uniprocessor multi-programming operating systems. CPU scheduling is the basis of multi-programmed operating systems. The scheduler is responsible for multiplexing processes on the CPU. There are many scheduling algorithms available for a multi-programmed operating system, such as FCFS, SJF, Priority and Round Robin. The proposed algorithm is based on Round Robin scheduling. In this paper, the results of the existing Round Robin algorithm are compared with the proposed algorithm.
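A minimal sketch of how such a shortest-job-ordered round robin might behave is given below. The paper's exact SJRR ordering rule and quantum are not reproduced here; the version shown assumes arrived processes are kept sorted by remaining burst time and served with a fixed quantum, which is one plausible reading.

```python
# Hedged sketch: a shortest-job-first / round-robin hybrid ("SJRR"-style).
# Quantum and ordering rule are illustrative assumptions.

def sjrr(processes, quantum=2):
    """processes: list of (pid, arrival, burst). Returns avg waiting time."""
    arrival = {p: a for p, a, _ in processes}
    burst = {p: b for p, _, b in processes}
    remaining = dict(burst)
    finish, t = {}, 0
    while remaining:
        ready = sorted((p for p in remaining if arrival[p] <= t),
                       key=lambda p: remaining[p])
        if not ready:                       # CPU idle until the next arrival
            t = min(arrival[p] for p in remaining)
            continue
        p = ready[0]                        # shortest remaining job first
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    waits = [finish[p] - arrival[p] - burst[p] for p in finish]
    return sum(waits) / len(waits)

print(sjrr([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
```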
Machine Learning: An artificial intelligence methodology
Anish Talwar, Yogesh Kumar,
The problem of learning and decision making is at the core of argument in biological as well as artificial aspects. Scientists therefore introduced machine learning, a widely used concept in artificial intelligence. It is the concept that teaches machines to detect different patterns and to adapt to new circumstances. Machine learning can be both experience- and explanation-based learning. In the field of robotics, machine learning plays a vital role: it helps in taking an optimized decision for the machine, which eventually increases the efficiency of the machine and leads to a more organized way of performing a particular task. Nowadays the concept of machine learning is used in many applications and is a core concept for intelligent systems, leading to the introduction of innovative technology and more advanced concepts of artificial thinking.
Digital watermarks allow information about the owner, usage rights and more to be permanently attached to the content. Watermarking is the art and science of communicating in such a way that the presence of a message cannot be detected. Steganographic techniques have been in use for hundreds of years. The other major area of copyright marking, where the message to be inserted is used to assert copyright over a document, can be further divided into watermarking and fingerprinting. Digital watermarking has been proposed as a solution to the problem of copyright protection of multimedia data in a networked environment. It makes it possible to tightly associate with a digital document a code allowing the identification of the data creator, owner, authorized consumer, and so on.
Slope Finder – A Distance Measure for DTW-Based Isolated Word Speech Recognition
A. Akila, Dr. E. Chandra,
Speech can be recognized by machine using many algorithms such as Dynamic Time Warping, Hidden Markov Models and Artificial Neural Networks. In this paper, an overview of Dynamic Time Warping and the various distance metrics used to measure the spectral distance is given. A new distance metric is proposed which reduces the computational complexity.
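For readers unfamiliar with DTW, the sketch below shows the standard dynamic-programming recursion. The plain absolute-difference local distance is a stand-in for any spectral distance measure; it is not the slope-finder metric proposed in the paper.

```python
# Standard DTW recursion; local_dist is a placeholder distance measure.
import math

def dtw(a, b, local_dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = local_dist(a[i - 1], b[j - 1])
            # match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

template = [1.0, 2.0, 3.0, 2.0, 1.0]
utterance = [1.1, 1.9, 3.2, 3.1, 2.0, 1.0]
print(dtw(template, utterance))       # total warped spectral distance
```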
Cloud computing is a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications. The goal of cloud computing is to apply traditional supercomputing, or high-performance computing power, normally used by military and research facilities, to perform tens of trillions of computations per second, in consumer-oriented applications such as financial portfolios, to deliver personalized information, to provide data storage or to power large, immersive computer games. To prevent client data loss, an object-centered data approach has been proposed. We leverage the JAR programmable capabilities both to create a dynamic and traveling object, and to ensure that any access to users’ data will trigger authentication and automated logging local to the JARs. To strengthen users’ control, we also provide distributed auditing mechanisms. The result is the Cloud Information Accountability (CIA) framework, based on the notion of information accountability.
Efficient Monitoring of Intrusion Detection in Mobile Ad Hoc Networks Using a Monitoring-Based Approach
N. Kumar, M. Rameshkuar, R. Karthikeyan,
A mobile ad hoc network (MANET) is a collection of wireless devices moving in seemingly random directions and communicating with one another without the aid of an established infrastructure. To extend the reachability of a node, the other nodes in the network act as routers. Several intrusion detection techniques (IDTs) proposed for mobile ad hoc networks rely on each node passively monitoring the data forwarding by its next hop. This project presents quantitative evaluations of false positives and their impact on monitoring-based intrusion detection for ad hoc networks. Experimental results show that, even for a simple three-node configuration, an actual ad hoc network suffers from high false positives; these results are validated by Markov and probabilistic models. However, this false positive problem cannot be observed by simulating the same network using popular ad hoc network simulators, such as ns-2, OPNET or GloMoSim. To remedy this, a probabilistic noise generator model is implemented using a sliding-window-based monitoring approach. With this revised noise model, the simulated network exhibits aggregate false positive behavior similar to that of the experimental testbed. Simulations of larger (50-node) ad hoc networks indicate that monitoring-based intrusion detection has very high false positives. These false positives can reduce the network performance or increase the overhead. In a simple monitoring-based system where no secondary and more accurate methods are used, the false positives impact the network performance in two ways: reduced throughput in normal networks without attackers, and inability to mitigate the effect of attacks in networks with attackers.
9-Transistor Full Adder Design for Area Minimization
Shubha Goswami, Shruti Dixit,
This paper presents the design of a new 9-transistor full adder, simulated in 22 nm CMOS technology. The design is based on a 3-transistor XOR gate, two 2x1 multiplexers and a CMOS inverter. The main objectives of this circuit are low power and small full adder size. Various methods are available for building a full adder using a larger number of transistors, but these occupy more area on the silicon chip, so our design requires less area by using small building blocks. The design performance is analysed in the Microwind Layout Editor at the 22 nm fabrication technology node.
Speech Enhancement through Elimination of Impulsive Disturbance Using Log-MMSE Filtering
D. Koti Reddy, T. Jayanandan, C.L. Vijay Kumar,
The project presents enhancement of the speech signal by removal of impulsive disturbance from noisy speech using a log minimum mean-square error (log-MMSE) filtering approach. Impulsive noise has the potential to degrade the performance and reliability of the speech signal. To recover the speech component from impulsive disturbance we apply pre-emphasis, signal segmentation and log-MMSE filtering. Preprocessing of the audio signal starts with pre-emphasis, a system process designed to increase the magnitude of some frequencies with respect to the magnitude of other frequencies in order to improve the overall signal-to-noise ratio. The signal samples are then segmented into a fixed number of frames and the samples of each frame are weighted with Hamming window coefficients. The minimum mean-square error log-spectral amplitude (log-MMSE) estimator, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the gains associated with the speech signal. The performance of the filtering is measured with signal-to-noise ratio, Perceptual Evaluation of Speech Quality (PESQ), and correlation.
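The pre-emphasis and framing steps described above are easy to make concrete. The sketch below assumes the conventional first-order pre-emphasis filter with coefficient 0.97 and a 256-sample frame; these values are common defaults, not values taken from the paper.

```python
# Hedged sketch of the preprocessing chain: pre-emphasis, then
# fixed-length framing with a Hamming window. Parameters are assumptions.
import numpy as np

def preprocess(x, frame_len=256, hop=128, alpha=0.97):
    # pre-emphasis: y[n] = x[n] - alpha * x[n-1], boosts high frequencies
    y = np.append(x[0], x[1:] - alpha * x[:-1])
    window = np.hamming(frame_len)
    frames = [y[i:i + frame_len] * window
              for i in range(0, len(y) - frame_len + 1, hop)]
    return np.array(frames)

signal = np.random.randn(4000)     # stand-in for a noisy speech signal
frames = preprocess(signal)
print(frames.shape)                # (num_frames, frame_len)
```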
Effect of Aspirated Consonants on EMG Signals Generated in Zygomaticus Muscles
Himanshu Raina, Randhir Singh, Parveen Lehana,
Speech communication is one of the simplest and most reliable forms of communication between humans, and it is affected by the behaviour and emotions of speakers. Electrical currents generated in the facial muscles during speech production are represented by a form of biomedical signal known as EMG. These signals result from contraction and relaxation of the muscles, which are controlled by the nervous system. Signals from specific facial muscles are recorded for speech recognition and system automation. EMG signals are generally recorded using small surface electrodes placed near to each other. EMG activity is frequently recorded from specific muscles and plays a prominent role in the expression of elementary emotions and speech generation. The present research paper investigates the EMG patterns generated during the utterance of unvoiced consonants. Six subjects in the age range of 20-25 years were taken (three males and three females). Thirty-eight vowel-consonant-vowel (VCV) syllables in Hindi were recorded along with the corresponding facial EMG signal. For each speaker, the means of the log-spectral distances (LSD) between the EMG signal of the VCVs and the reference EMG signal were computed. Analysis of the spectrograms and LSD showed that the EMG signals generated in the muscle vary with the subject and the VCV. Hence, for automatic decoding of the EMG signals, the system should be trained using both variants.
Seamless Handover between Bluetooth and WiFi Using a Packet Content Transfer Method
Pallavi Dubey, Prof. Namrata Sahayam,
Next-generation wireless communications will likely rely on integrated networks consisting of multiple wireless technologies. Hybrid networks based, for instance, on systems such as Bluetooth and WiFi can combine their respective advantages in coverage and data rates. In such an environment, WiFi/Bluetooth devices should seamlessly switch from one network to another in order to obtain improved performance, or at least to maintain a continuous wireless connection. This paper proposes a new algorithm for handover between Bluetooth and WiFi, which combines one trigger to continuously maintain the connection and another to maximize the user throughput. Moreover, we detail the implementation of that algorithm following the packet content transfer method and session transfer among heterogeneous deployments.
Hue-Preserving Color Image Enhancement without the Gamut Problem Using a Newly Proposed Algorithm
Dipte Porwal, Md. S. Alam, Anjana Porwal,
This paper suggests an effective way of tackling the gamut problem during the processing itself, so it is not necessary to bring the R, G, and B values back within bounds after processing. The proposed algorithm does not reduce the intensity achieved by the enhancement process. The enhancement procedure suggested here is hue preserving, and it generalizes existing gray-scale image enhancement techniques to color images. The processing is done in RGB space itself, and the saturation and hue values of pixels are not needed for the processing. The objective of contrast enhancement is to increase the visibility of details that may be obscured by deficient global and local lightness. The goal of color enhancement can be either to increase the colorfulness or to increase the saturation. This method also avoids out-of-gamut or unrealizable colors.
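The core idea, working in RGB while preserving hue and staying in gamut, can be sketched as follows: scaling all three channels of a pixel by the same factor preserves hue, and capping the factor so no channel exceeds the maximum avoids out-of-gamut values. The gamma intensity mapping used here is an illustrative assumption, not the paper's enhancement function.

```python
# Hedged sketch: hue-preserving, gamut-safe enhancement in RGB space.
import numpy as np

def enhance(rgb, gamma=0.6, max_val=255.0):
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=-1)                      # per-pixel intensity
    target = max_val * (intensity / max_val) ** gamma  # enhanced intensity
    gain = np.where(intensity > 0,
                    target / np.maximum(intensity, 1e-9), 1.0)
    # cap the gain so the largest channel stays within gamut
    peak = rgb.max(axis=-1)
    gain = np.minimum(gain, np.where(peak > 0,
                                     max_val / np.maximum(peak, 1e-9), 1.0))
    # equal scaling of R, G, B preserves the pixel's hue
    return (rgb * gain[..., None]).clip(0, max_val).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(enhance(img).shape)
```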
A Strategic Capacity Planning Model: A Genetic Algorithm Approach
N. Subramanian,
This paper presents an optimization-based method to meet the demands of multi-period production capacity planning by identifying the resources required for future months. Capacity planning plays an important role in all manufacturing companies: the demand forecast by the company is fully delivered to the customers through perfect production planning. Here we use a genetic algorithm as the optimization tool for finding the optimal solution that minimizes the production cost over six months and maximizes profit. The production cost must be minimized as much as possible and inventory levels should be perfectly maintained, so that profit increases automatically. Inventory management also plays an important role in supply chain management, and inventory optimization is a real-time task for engineers at the top administrative level. We therefore concentrate on the main costs - material cost, inventory cost, marginal cost of backlog, hiring and training cost, lay-off cost, regular-time cost, overtime cost and cost of sub-contracting - allocating them by considering the demands of six months. We focus on generating an optimal production schedule for six months by fixing forty-eight variables.
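A minimal genetic-algorithm sketch for a six-period plan is shown below. The chromosome is the production quantity per month, and the toy cost model (unit cost, holding cost, backlog penalty) is an assumption for illustration; it is far simpler than the paper's forty-eight-variable formulation.

```python
# Hedged GA sketch for six-period capacity planning; costs are toy values.
import random

DEMAND = [100, 120, 90, 150, 130, 110]
UNIT, HOLD, BACKLOG = 5.0, 1.0, 8.0

def cost(plan):
    total, stock = 0.0, 0
    for produce, demand in zip(plan, DEMAND):
        stock += produce - demand
        total += UNIT * produce
        total += HOLD * stock if stock > 0 else BACKLOG * (-stock)
    return total

def evolve(pop_size=40, gens=200, mut=0.2):
    pop = [[random.randint(50, 200) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DEMAND))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:                # random-reset mutation
                child[random.randrange(len(DEMAND))] = random.randint(50, 200)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```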
An Enhanced Content Based Image Retrieval System using Color Features
R. Malini, C. Vasanthanayaki,
This paper was motivated by the desire to improve the effectiveness of retrieving images on the basis of color content using a color averaging technique. A combined set of methods based on the color averaging technique is proposed to achieve higher retrieval efficiency and performance. Firstly, an average-mean-based technique with reduced feature size is proposed. Secondly, a feature extraction technique based on central tendency is proposed. The proposed CBIR techniques are tested on the Wang image database and an indexed image database. The results obtained are compared with the existing technique in terms of memory utilization and query execution time. The experimental results show that the proposed technique gives better performance, with higher precision and recall values and less computational complexity than the conventional techniques.
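The general color-averaging idea can be sketched compactly: summarize each image by the mean R, G, B of a coarse block grid, then retrieve by nearest neighbor on that compact feature. The 4x4 grid and Euclidean matching are illustrative assumptions, not the paper's exact feature design.

```python
# Hedged sketch of color-averaging features for CBIR.
import numpy as np

def color_avg_feature(img, grid=4):
    h, w, _ = img.shape
    bh, bw = h // grid, w // grid
    feat = [img[r*bh:(r+1)*bh, c*bw:(c+1)*bw].reshape(-1, 3).mean(axis=0)
            for r in range(grid) for c in range(grid)]
    return np.concatenate(feat)            # length 3 * grid * grid

def retrieve(query, database, k=3):
    qf = color_avg_feature(query)
    dists = [(np.linalg.norm(qf - color_avg_feature(img)), i)
             for i, img in enumerate(database)]
    return sorted(dists)[:k]               # k nearest images

db = [np.random.randint(0, 256, (64, 64, 3)) for _ in range(10)]
print(retrieve(db[0], db))                 # db[0] ranks first (distance 0)
```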
Color STIPs for the Live Feed
V. Lakshmi Chetana, D. Prasad, G. Krishna Chaitanya,
Interest point detection is a vital aspect of computer vision and image processing. It is very important for image retrieval and object categorization systems, in which local descriptors are calculated for image matching schemes. Earlier approaches largely ignored color, as interest point calculations were based mainly on luminance; however, the use of color increases the distinctiveness of interest points. Subsequently, an approach using saliency-based feature selection aided with a principal component analysis-based scale selection method was developed, yielding a light-invariant interest point detection system. This paper introduces color interest points for sparse image representation. In a video-indexing framework, interest points provide more useful information than in static images. Since the human perception system is naturally influenced by the motion of objects, we propose to extend the above approach to dynamic video streams using the Space-Time Interest Points (STIP) method. The method calculates interest points in the 3D domain (i.e., for feature extraction, the main 2D concepts for images are extended to 3D). STIP renders moving objects in a live feed and characterizes the specific changes in the movement of these objects. A practical implementation of the proposed system validates our claim to support live video feeds, and it can further be used in domains such as motion tracking, entity detection and naming applications, which are of considerable importance.
In this work, images have been decomposed using the wavelet decomposition technique with different wavelet transforms and different levels of decomposition. Two different images were taken, and the wavelet decomposition technique was applied to them. The parameters of the decomposed images were calculated with respect to the original image: the peak signal-to-noise ratio (PSNR) and mean square error (MSE) of the decomposed images were computed. PSNR is used to measure the difference between two images. From the several types of wavelet transforms, Daubechies (db) wavelet transforms were used to analyze the results. The threshold value is rescaled for denoising purposes. De-noising based on wavelet decomposition is one of the most significant applications of wavelets.
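A sketch of this experimental pipeline is given below, assuming the third-party PyWavelets package is available: decompose with a Daubechies wavelet, soft-threshold the detail coefficients, reconstruct, then compute MSE and PSNR. The db2 wavelet, 2-level decomposition and threshold value are illustrative choices, not the paper's settings.

```python
# Hedged sketch: wavelet decompose/threshold/reconstruct, then MSE & PSNR.
# Requires the PyWavelets package (pip install PyWavelets).
import numpy as np
import pywt

img = np.random.rand(128, 128) * 255            # stand-in for a test image

coeffs = pywt.wavedec2(img, "db2", level=2)     # 2-level db2 decomposition
thr = 10.0                                      # illustrative threshold
denoised = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
rec = pywt.waverec2(denoised, "db2")[:128, :128]

mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```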
Modeling and Design of Microstrip Line Based SIW and Structural Effect on Wave Propagation Characteristics
Yasser Arfat, Sharad P. Singh,
A model of the substrate integrated waveguide (SIW) has been analyzed and designed to investigate the effect of geometrical shape on propagation characteristics. The parameters evaluated in this work are the electric field, return loss and transmission gain. A printed circuit board (PCB) is used as the dielectric, and the results are evaluated over the frequency range of 6 to 11 GHz. An FEM-based method is applied to optimize the design of the SIW device. The results show that gain increases with frequency up to 9.75 GHz, and correspondingly the return loss is at its minimum at this frequency.
Image Compression Using SVD Technique and Measurement of Quality Parameters
Rahul Samnotra, Randhir Singh, Javid Khan,
A singular value decomposition (SVD) based algorithm is designed to compress black-and-white images in order to evaluate the effect on the information content of the image. To evaluate this effect, a second algorithm is developed to calculate the SNR, PSNR, MSE, and RMSE values of the compressed images with respect to the reference image. It is interpreted from the results that at high levels of compression the information content of the image is degraded to a greater degree.
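The rank-truncation idea behind SVD compression is standard and easy to illustrate: keep the k largest singular values and measure the quality of the rank-k approximation. The specific k values and random test image below are illustrative.

```python
# Minimal SVD compression sketch with MSE/RMSE/PSNR quality measures.
import numpy as np

def svd_compress(img, k):
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation

def quality(ref, test, peak=255.0):
    mse = np.mean((ref - test) ** 2)
    rmse = np.sqrt(mse)
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, rmse, psnr

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
for k in (4, 16, 32):                              # fewer values = more compression
    mse, rmse, psnr = quality(img, svd_compress(img, k))
    print(f"k={k:2d}  MSE={mse:8.2f}  RMSE={rmse:6.2f}  PSNR={psnr:5.2f} dB")
```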
A New Approach to Automatic Generation of All Quadrilateral Mesh for Finite Element Analysis
H.T. Rathod, Bharath Rathod, T. Shivaram, K. Sugantha Devi,
This paper presents a new mesh generation method for a convex polygonal domain. We first decompose the convex polygon into simple subregions in the shape of triangles. These simple regions are then triangulated to generate a fine mesh of triangular elements. We then propose an automatic triangular-to-quadrilateral conversion scheme. Each isolated triangle is split into three quadrilaterals according to the usual scheme, adding three vertices at the middle of the edges and a vertex at the barycentre of the element. To preserve mesh conformity, a similar procedure is also applied to every triangle of the domain to fully discretize the given convex polygonal domain into all quadrilaterals, thus propagating uniform refinement. This simple method generates a high-quality mesh whose elements conform well to the requested shape by refining the problem domain. Examples are presented to illustrate the simplicity and efficiency of the new mesh generation method for standard and arbitrarily shaped domains. We have appended MATLAB programs which incorporate the mesh generation scheme developed in this paper. These programs provide valuable output on the nodal coordinates, element connectivity and a graphic display of the all-quadrilateral mesh for application to finite element analysis.
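The usual three-quad split named above is a purely geometric operation, sketched below for a single triangle: each quad is formed by one corner vertex, its two adjacent edge midpoints, and the barycentre. (The paper's appended programs are in MATLAB; this standalone sketch uses Python with illustrative coordinates.)

```python
# Sketch of the standard triangle-to-three-quadrilaterals split.

def tri_to_quads(p0, p1, p2):
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    m01, m12, m20 = mid(p0, p1), mid(p1, p2), mid(p2, p0)
    # barycentre of the triangle
    g = ((p0[0] + p1[0] + p2[0]) / 3.0, (p0[1] + p1[1] + p2[1]) / 3.0)
    # each quad: corner vertex, adjacent midpoint, barycentre, other midpoint
    return [(p0, m01, g, m20),
            (p1, m12, g, m01),
            (p2, m20, g, m12)]

for quad in tri_to_quads((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)):
    print(quad)
```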
The need for the internet has increased manifold in the recent past. Many algorithms and techniques have been designed and developed to increase the speed of data over the internet. The routers which route packets to their destinations also play an important role in speedy and effective data transfer over the internet. Normal internet routers use shortest-path routing techniques. The shortest-path algorithm is effective, but a difficult situation arises when the shortest path is already busy with data or carries heavy traffic: if we send our data through the same shortest path, the traffic will increase further, causing congestion and packet collisions on that path. Here we design a new technique which senses traffic in the network and sends packets through the path that has the least traffic. This method prevents the path from getting further congested and reduces packet collisions.
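One simple way to realize this idea is to scale each edge weight by its measured traffic level before running a shortest-path search, so the search steers around congested links. The combination rule and the toy topology below are illustrative assumptions, not the paper's exact technique.

```python
# Hedged sketch: Dijkstra over congestion-scaled edge weights.
import heapq

def least_traffic_path(graph, src, dst):
    """graph: {node: [(neighbor, base_cost, traffic), ...]}"""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, traffic in graph.get(u, []):
            nd = d + cost * (1.0 + traffic)      # congestion-scaled weight
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

g = {"A": [("B", 1, 2.0), ("C", 2, 0.0)],   # A-B is shorter but congested
     "B": [("D", 1, 2.0)], "C": [("D", 2, 0.1)], "D": []}
print(least_traffic_path(g, "A", "D"))       # ['A', 'C', 'D']
```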
A Trust System for Broadcast Communications in SCADA
Manoj B C,
Modern industrial facilities have command and control systems. These industrial command and control systems are commonly called supervisory control and data acquisition (SCADA). In the past, SCADA systems had a closed operating environment, so they were designed without security functionality. These days, as demand for connecting SCADA systems to open networks increases, the study of SCADA system security has become an important issue. A key-management scheme is essential for secure SCADA communications. In this paper, ASKMA+TS, a more efficient scheme that decreases the computational cost of multicast communication, is introduced. ASKMA+TS reduces the number of keys to be stored in a remote terminal unit and provides multicast and broadcast communications. The last section discusses the use of a communications network security device, called a trust system, to enhance SCADA security. The major goal of the trust system is to increase security with minimal impact on existing utility communication systems.
E-Mail Abstraction Scheme Using Collaborative Spam Detection Scheme
Vinod S., Insozhan N., Vimal V.R.,
Email communication is widespread and essential nowadays. However, the threat of unsolicited junk emails, also known as spam, becomes more and more serious. The basic idea of the similarity matching scheme for spam detection is to maintain a known spam database, formed by user feedback, to block subsequent near-duplicate spam. To achieve efficient similarity matching and reduce storage utilization, prior works mainly represent each email by a succinct abstraction derived from the email content text. However, these abstractions of emails cannot fully capture the evolving nature of spam, and are thus not effective enough in near-duplicate detection. An email abstraction scheme is proposed which considers the email layout structure to represent emails. The procedure SAG (Structure Abstraction Generation) is presented to generate the email abstraction using the HTML content in an email, and this newly devised abstraction can more effectively capture the near-duplicate phenomenon of spam. Moreover, we design a complete spam detection system which possesses an efficient near-duplicate matching scheme and a progressive update scheme. The progressive update scheme enables this system to keep the most up-to-date information for near-duplicate detection.
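The layout-structure idea can be sketched in the spirit of SAG, though the details of the paper's procedure are not reproduced here: reduce the HTML part of a message to its tag sequence, ignoring text, so reworded near-duplicate spam maps to the same abstraction. Exact tuple equality as the matching rule is a simplification.

```python
# Hedged sketch: layout-based email abstraction (SAG-like simplification).
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)        # record layout, discard text content

def abstraction(html):
    p = TagSequence()
    p.feed(html)
    return tuple(p.tags)

spam_db = {abstraction(
    "<html><body><p>Cheap pills!</p><a href='x'>buy</a></body></html>")}
incoming = "<html><body><p>Great meds!!</p><a href='y'>order</a></body></html>"
print(abstraction(incoming) in spam_db)   # True: same layout structure
```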
Secured PHR Transactions using Homomorphic Encryption in Cloud Computing
Vidya S., Vani K.,
Personal Health Records (PHRs) have emerged as a patient-centric model of health information management and exchange. PHRs are stored electronically in one centralized place in a third-party cloud server. This greatly facilitates the management and sharing of patients’ health information, but also raises serious privacy concerns about whether these service providers can be fully trusted. To facilitate the rapid adoption of cloud data storage while preserving security assurances for outsourced PHRs, an efficient method has been planned. To ensure patients’ control over their own privacy, homomorphic encryption has been proposed together with data auditing to authenticate the accuracy of PHRs stored in the cloud server.
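To illustrate why homomorphic encryption suits outsourced PHR processing, the toy sketch below uses the additively homomorphic Paillier scheme: the server can add encrypted values without ever seeing them. The paper's actual scheme and parameters are not given here; the tiny fixed primes are for illustration only and are nowhere near secure. (Requires Python 3.9+ for math.lcm and modular-inverse pow.)

```python
# Toy Paillier sketch: additive homomorphism on encrypted values.
import math

p, q = 293, 433                 # insecure toy primes, illustration only
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def enc(m, r):                  # r must be coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n          # L(x) = (x - 1) / n
    return (L * pow(lam, -1, n)) % n

c1, c2 = enc(42, 17), enc(58, 23)
combined = (c1 * c2) % n2                   # homomorphic addition
print(dec(combined))                        # 100, computed under encryption
```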
Glass Fiber Reinforced Concrete & Its Properties
Shrikant Harle, Prof. Ram Meghe,
Glass fiber reinforced concrete (GFRC) is a relatively recent introduction in the field of civil engineering, and it has been extensively used in many countries since its introduction two decades ago. This product has the advantage of being light in weight, thereby reducing the overall cost of construction and ultimately bringing economy to construction. Steel reinforcement corrosion and structural deterioration in reinforced concrete structures are common and have prompted many researchers to seek alternative materials and rehabilitation techniques. Researchers all over the world are therefore attempting to develop high-performance concrete by using glass fibers and other admixtures in the concrete up to a certain extent. In view of the global sustainability scenario, it is notable that fibers such as glass, carbon, aramid and polypropylene provide very wide improvements in tensile strength, fatigue characteristics, durability, shrinkage characteristics, impact, cavitation and erosion resistance, and serviceability of concrete. The present work is an accumulation of information about GFRC and the research work already carried out by other researchers.
Overcoming Cache Staleness Issue Using Dynamic Source Routing Protocol
Chaitali G. Taral, Dr. P.R. Deshmukh,
Recently, there has been growing interest in Mobile Ad Hoc Networks (MANETs). Ad hoc networks have become popular since they can provide useful personal communication in applications such as battlefield, academic, and business settings where no fixed infrastructure exists. All mobile nodes communicate with each other directly or through intermediate nodes using a routing protocol. This paper focuses mainly on the Dynamic Source Routing (DSR) protocol and its route cache. To address the cache staleness issue, which leads to the loss of packets during delivery of information in the DSR protocol, prior work used adaptive timeout mechanisms. Such mechanisms use heuristics with ad hoc parameters to predict the lifetime of a link or a route. However, heuristics cannot accurately estimate timeouts, and as a result valid routes will be removed or stale routes will be kept in caches. In this work we propose proactively disseminating broken-link information to the nodes that have that link in their caches. It is important to inform only the nodes that have cached a broken link, to avoid unnecessary overhead. Thus, when a link failure is detected, our goal is to notify all reachable nodes that have cached the link about the link failure, which leads to the minimization of overhead, latency and congestion in the network.
Semantically Driven Personalized Recommendations on Sparse Data
Tridev Kaushik, Kavita Srivastava,
Recommendation techniques play a significant role in the design of online applications, especially e-commerce applications. Generally, recommendation techniques use filtering methods, which fall into the categories of content filtering and collaborative filtering. Content filtering requires matching a user's profile with product features; it does not take into account the similarity among users' profiles. Recommender systems which use collaborative filtering also consider the similarity among other users' profiles when providing recommendations. Such recommender systems compute the rating of a product feature by considering the ratings specified by users with similar profiles. Recommender systems often suffer from the problem of sparse data. If the products to be sold online have several features that must all be rated by users, and the product is promoted online with a survey presented to thousands of online users, it may happen that not all users participate in the survey. Even the users who do participate may not provide ratings on all features of the product. This results in several missing values in the user-item matrix, which is therefore sparse in nature. If a matrix with sparse data is presented as input to the recommender system, the recommender system may not work correctly, so the missing values must be filled before the data is fed to the recommender system. In this paper we propose an approach to handle the problem of sparse data by using user profile similarities in a social network. Each user's profile is augmented with an additional attribute called trust, whose value represents the degree of trust that social network users place in the given user. When a user completes a given survey for a product and skips one or more ratings, the trust value from his or her profile is retrieved. Next, this user's friends list is retrieved, along with the ratings specified by those users. On the basis of these rating values and the trust value, the missing rating value is computed. Experimental results show that the rating values are computed with reasonably good accuracy.
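One plausible form of the imputation step described above is sketched below: a missing rating is filled from the ratings of the user's friends, weighted by the user's trust value. The exact combination rule, the 1-5 scale, and the fallback prior are illustrative assumptions, not the paper's formula.

```python
# Hedged sketch: trust-weighted imputation of a missing rating.

def fill_missing(user, feature, ratings, friends, trust):
    """ratings: {user: {feature: value}}; trust: {user: value in [0, 1]}."""
    peer_vals = [ratings[f][feature] for f in friends.get(user, ())
                 if feature in ratings.get(f, {})]
    if not peer_vals:
        return trust[user] * 5.0            # fall back to trust alone (1-5 scale)
    peer_mean = sum(peer_vals) / len(peer_vals)
    t = trust[user]
    return t * peer_mean + (1 - t) * 2.5    # blend friends' view with a prior

ratings = {"alice": {"battery": 4}, "bob": {"battery": 5}, "carol": {}}
friends = {"carol": ["alice", "bob"]}
trust = {"carol": 0.8}
print(fill_missing("carol", "battery", ratings, friends, trust))  # ~4.1
```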
Vertigo, dizziness and balance-related conditions are among the most common health problems in adults. The Centers for Disease Control and Prevention estimates that 7.4 million people per year visit a doctor for vertigo or dizziness. Almost everyone will have a balance problem, mild or severe, at some point in life. Positional information is gathered through the ear, the proprioceptors and the eyes, which carry signals to the brain; an error in any of these can lead to vertigo. Positional vertigo may also be caused by inflammation within the inner ear (labyrinthitis or vestibular neuritis), which is characterized by the sudden onset of vertigo and may be associated with hearing loss. The severity of the problem can be identified with a knowledge-based system that in turn uses domain-specific knowledge, leading to the development of an expert system. This expert system analyses the details of the patient through a software-based questionnaire, checks the severity of the problem and suggests suitable cognitive and behavioral therapies to mitigate the problem of vertigo. The expert system also suggests suitable rehabilitation exercises and diet, considering usage, duration and precautions based on the case and its severity, and develops a comprehensive response which can be used as a supplement to the doctor.
Modelling & Simulation of a Three-Phase Induction Motor Fed by an Asymmetrically Configured Hybrid Multilevel Inverter
Hitesh Kumar Lade, Preeti Gupta, Amit Shrivastava,
Multilevel inverters (MLIs) have become a viable solution for high-power DC-AC conversion. They are especially advantageous for induction motors because of low dv/dt stress and low common-mode voltages. The quality of a multilevel waveform is enhanced by increasing the number of levels; however, for an increased number of levels the component count becomes very high. In this paper an asymmetrically configured hybrid inverter is employed for a three-phase induction motor, resulting in a low device count compared to the classical topologies. MATLAB/Simulink based models are used to simulate and validate the proposed concepts.
Obstacle Locating Capabilities of Mobile Robot Using Various Navigational Aids
Avanish Shrivastava, Prof. Mohan Awasathy,
Automatic path planning is one of the most challenging problems confronting autonomous robots. Generating optimal paths for autonomous robots is one of the most heavily studied subjects in mobile robotics applications. The environments in which robots operate can be structured or unstructured. In various applications, the environment is detected through vision sensors such as cameras. Once the information about the environment is known, it is convenient to build a roadmap or graph of the environment; different path-planning algorithms exist for building such roadmaps. After obtaining a convenient representation of the environment, graph search methods can be used to find optimal paths through the roadmap. Graph search algorithms are very popular in the field of mobile robotics and find many applications in this field. This survey aims at an analysis of various navigational and path-planning algorithms using a mobile robot in a structured environment, along with a study of the shortcomings of these methods for efficient representation of the environment. The environment is detected through a camera, and a roadmap of the environment is built using these algorithms. Finally, the algorithm searches through the roadmap and finds an optimal path for the robot to move from the start position to the goal position.
No doubt, due to advancement in wireless technologies and networks, more efficient measures have to be taken to enable mobile users to be always connected to the best available access network depending on their requirements. For efficient delivery of services to mobile users, next-generation wireless networks require new mechanisms of mobility management in which the location of every user is proactively determined before the service is delivered. Moreover, for designing an adaptive communication protocol, various existing mobility management schemes have to be seamlessly integrated. Efficient handoff mechanisms are essential for ensuring seamless connectivity and uninterrupted service delivery. This paper gives a brief overview of handoff-related terms and an overview of the proposed work, which focuses on a handoff algorithm that selects a good-quality access point among the available access points.
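A common shape for such a selection rule, sketched below under stated assumptions, is to pick the strongest access point but hand off only when the candidate beats the current one by a hysteresis margin, which avoids ping-pong handoffs. The margin value and RSSI-only criterion are illustrative, not the paper's algorithm.

```python
# Hedged sketch: RSSI-based access-point selection with hysteresis.

HYSTERESIS_DB = 5.0     # illustrative margin

def select_ap(current_ap, rssi):
    """rssi: {ap_name: signal strength in dBm}. Returns the AP to use."""
    best = max(rssi, key=rssi.get)
    if current_ap is None or current_ap not in rssi:
        return best
    # hand off only if the best candidate is clearly better
    if rssi[best] - rssi[current_ap] > HYSTERESIS_DB:
        return best
    return current_ap

print(select_ap("AP1", {"AP1": -70.0, "AP2": -68.0}))  # stays on AP1
print(select_ap("AP1", {"AP1": -70.0, "AP2": -60.0}))  # hands off to AP2
```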
Nano Scale Simulation of GaAs based Resonant Tunneling Diode
Vivek Sharma, Raminder Preetpal Singh,
All kinds of tunneling diodes make use of quantum mechanical tunneling. A resonant-tunneling diode (RTD) is a diode with a resonant-tunneling structure in which electrons can tunnel through certain resonant states at particular energy levels. In this research article, an analytic model of the resonant tunneling diode (RTD) is simulated for two different structures: a single-barrier (1B) RTD and a double-barrier (2B) RTD. Different parameters such as the conduction band, I-V characteristics, resonant energy and transmission coefficients are studied to evaluate the performance of these structures. The double-barrier RTD shows improved results in comparison to the single-barrier RTD, keeping other parameters constant in the absence of an electric field.