E-VOTING is an umbrella term for the various techniques through which the electorate can express their intentions electronically. It entails the use of electronic voting equipment, phones, Personal Digital Assistants (PDAs), online voting, etc. Adopting an e-governance strategy in electioneering processes (in the Nigerian context of 36 states) will effectively reduce cost as well as enhance election activities. What makes an e-voting model acceptable is its ability to properly authenticate voters and provide a secure means through which a voter can exercise his or her franchise. Biometric authentication is regarded as an effective method for automatically recognizing a person's identity with high confidence. This paper therefore proposes the Self-Monitoring Analysis and Reporting E-Voting Simulation Model (SMARESiM), a design model of an e-voting system leveraging Biometric Encryption (BE), specifically Biometric Key Binding (BKB), a security strategy that fuses biometrics with cryptographic schemes. The main objective of this research is to improve on existing e-voting models by fusing biometric and cryptographic techniques and by using a secure transmission channel for the confidential datasets of a voting process. This work develops a simulation model of an e-voting system that adopts relevant algorithms and mathematical equations, with emphasis on biometric security schemes. A prototype of the electronic voting system is simulated using the Proteus 7.6 application software, with the PROTEUS ISIS models coded in Assembly Language; the relevant algorithms and flow models are presented.
The prototype model consists of electronic kiosk polling booths (two e-booths in this model) networked to a state electoral collection center; two state collection centers are in turn networked to the national electoral collection center over a Virtual Private Network (VPN) backbone. The VPN serves as the communication medium between the various polling booths and collection points, providing a fast, safe and reliable means of transmitting data over the internet. Validation results show that the proposed model facilitates the adoption of e-governance in developing countries.
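The Biometric Key Binding idea described above can be illustrated with a minimal sketch. Everything here is hypothetical (the template bytes, the 16-byte key, the XOR-based helper data); a real BKB scheme would add error-correcting codes so that noisy biometric readings still release the key.

```python
import hashlib
import os

def bind_key(biometric_template: bytes, key: bytes) -> bytes:
    """Bind a secret key to a biometric template (toy fuzzy-commitment style).

    The stored "helper data" reveals neither the key nor the template on its own.
    """
    digest = hashlib.sha256(biometric_template).digest()[:len(key)]
    return bytes(k ^ d for k, d in zip(key, digest))

def release_key(biometric_template: bytes, helper_data: bytes) -> bytes:
    """Recover the key; succeeds only with the same (exact) template."""
    digest = hashlib.sha256(biometric_template).digest()[:len(helper_data)]
    return bytes(h ^ d for h, d in zip(helper_data, digest))

template = b"minutiae-feature-vector-of-voter-001"   # hypothetical template
key = os.urandom(16)                                 # session key protecting the ballot
helper = bind_key(template, key)                     # this is what gets stored
```

At authentication time, only a voter presenting the same template recovers the ballot key; the stored helper data by itself is useless to an attacker.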
ISCLOUD V.1.0: AN INTERACTIVE CLOUD SHOPPING CART BASED ON SOFTWARE AS A SERVICE COMPUTING MODEL WITH HYBRID CRYPTOGRAPHIC ALGORITHM
Okafor KC, Udeze CC, Okafor CM
Online shopping-cart access control presents new security concerns for cloud computing applications in general. Contemporary shopping-cart solutions leverage the cloud Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) delivery models, viz: cloud web hosting, cloud hosting, reseller web hosting, business web hosting, dedicated instances and web-hosting business solutions. Shopping carts can now be dedicated to providing customers with the most reliable web ordering as well as web hosting services in real time. Candidate models such as X-Cart, S-Cart Now by Amazon Services, CubeCart, Zen Cart, AspDotNetStorefront, WDL and Mal's E-commerce, among others, leverage the potential benefits of today's e-commerce. However, most e-commerce proposals in the literature lack adequate security integration and trust, leaving online transactions vulnerable at large. This paper presents ISCloud V.1.0, an interactive cloud shopping cart based on SaaS with hybrid cryptography, in which a fast, high-quality combination of symmetric-key and public-key encryption is used for access authentication. In this context, the generated symmetric key is used for integrity encryption for authentication access in the ISCloud V.1.0 SaaS model. To earn customer trust, a Secure Sockets Layer (SSL) certificate (domain validation) will be acquired from a trusted Certificate Authority at deployment. The design methodology and service framework are detailed in the body of this paper. PHP, the XAMPP Apache stack and MySQL are used for a prototype implementation.
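The hybrid symmetric/public-key pattern the abstract describes can be sketched as follows. The textbook RSA parameters and the SHA-256 counter-mode stream cipher below are toy stand-ins chosen purely for illustration, not the algorithms used in ISCloud V.1.0; a real deployment would use a vetted cipher suite.

```python
import hashlib

# --- toy RSA key pair (textbook numbers p=61, q=53; illustration only) ---
N, E, D = 3233, 17, 2753

def rsa_wrap(session_key: bytes) -> list:
    """Encrypt each key byte with the public exponent (toy key wrapping)."""
    return [pow(b, E, N) for b in session_key]

def rsa_unwrap(wrapped: list) -> bytes:
    """Recover the session key with the private exponent."""
    return bytes(pow(c, D, N) for c in wrapped)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR with SHA-256 in counter mode."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

session_key = b"0123456789abcdef"                 # hypothetical session key
order = b"cart: 2 x widget, card ****1111"        # hypothetical cart payload
ciphertext = keystream_xor(session_key, order)    # fast symmetric encryption
wrapped_key = rsa_wrap(session_key)               # public key protects the key
recovered = keystream_xor(rsa_unwrap(wrapped_key), ciphertext)   # receiver side
```

The design point is the division of labour: the slow public-key step protects only the short session key, while the fast symmetric step protects the bulk cart data.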
A LITERAL SYNTHESIS ALGORITHM FOR HIGH PROCESS REGULATION IN ICOFLS
Okafor KC Udeze CC, Okafor CM
This paper presents an implementation algorithm for an Intelligent Changeover Fuzzy Logic Switching System (ICOFLS) for domestic load management. The algorithm achieves high-quality regulation by using a fuzzy logic controller to self-manage three entities, viz: the phase lines, the generator system and the inverter system. The MATLAB Simulink fuzzy logic blockset was used in this research for high process regulation. With a Mamdani fuzzy inference structure developed for the system algorithm, the fuzzy plots were generated for the ICOFLS. In this context, the paper proposes a fuzzy logic control (FLC) Altera Stratix IV GX FPGA, designed and fabricated in 0.8 μm CMOS technology, for real-time process control in the ICOFLS. The high regulation factor, simplicity, flexibility and adaptability of the FLC FPGA make the ICOFLS dual-knowledge-based, viz: inference control rule implementation and output scale regulation. The synthesis algorithm and experimental results of the ICOFLS model show that the system demonstrates resilience, stability, cost effectiveness and environmental sustainability.
AN ECONOMETRIC INVESTIGATION ON THE IMPACT OF ICT IN THE BUSINESS PROCESSES OF FINANCIAL INSTITUTIONS IN THE DEVELOPING COUNTRIES
Udeze CC, Okafor KC, Nwafor CM, Abarikwu AC
This paper presents Information and Communication Technology (ICT) as a strategic driver of the business processes of financial institutions in Nigeria. The soundness of banks is important for the economic development of any country. In this work, an attempt was made to empirically examine the impact of key metrics such as ICT infrastructure, intellectual capital, IT governance, ICT Service Management (ISM), a Strategic Policy Framework (SPF) and service innovation on the business processes of banks in Nigeria. The study was conducted with ten banks in Nigeria. The empirical results show that with the development of the aforementioned key metrics, the soundness of the financial institutions on both microeconomic and macroeconomic scales will be enhanced. It was observed that with financial institutions well developed via ICT, sound banks can address the issues of good quality of service, a good integrity system, cost reduction and profit maximization. A bivariate data model was developed through linear regression to ascertain the extent of correlation between the ICT key indices and the attendant socio-economic index. [Keywords: ICT, Business, Process, Infrastructure, Bivariate, Model, Correlation, Indices]
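A bivariate linear-regression model of the kind described can be sketched in a few lines of ordinary least squares; the data values below are hypothetical illustrations, not figures from the study.

```python
def bivariate_fit(x, y):
    """Ordinary least-squares fit y = a + b*x plus the Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)              # sum of squares of x
    syy = sum((yi - my) ** 2 for yi in y)              # sum of squares of y
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                                      # slope
    a = my - b * mx                                    # intercept
    r = sxy / (sxx * syy) ** 0.5                       # correlation coefficient
    return a, b, r

# hypothetical data: ICT-investment index vs. socio-economic index per bank
ict = [1.0, 2.0, 3.0, 4.0, 5.0]
soc = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b, r = bivariate_fit(ict, soc)
```

A correlation r close to 1 would indicate, as the abstract argues, a strong positive association between the ICT key indices and the socio-economic index.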
IWLAN: AN IMPLEMENTATION MODEL FOR HIGH DENSITY SMART INTRANET COMMUNICATIONS, (A CASE FOR ELDI)
Okafor KC, Udeze CC, Nwafor CM, Abarikwu AC
The need for high-performance wireless network connectivity that saves cost and time, avoids the mess of wires running across offices and provides high-speed access has continued to attract the attention of individuals and of private and corporate organizations in developing countries. This paper presents a cost-effective wireless hotspot model called iWLAN for the Electronics Development Institute (ELDI), Awka, Nigeria, that supports multiple simultaneous user sessions. The work presents an implementation model of iWLAN and conducts an experimental study of the enhanced distributed channel access mechanism of the Linksys 300N AP used in this work. The main focus of the study is on SMART intranet traffic engineering and QoS guarantees in the iWLAN model. With the former, we aim at distributing the bandwidth in the iWLAN according to an available-throughput allocation criterion. With the latter, the objective is to ensure that the performance metrics (throughput and delay) experienced by a user allow for flexible data communication within the high-density zones. We present our implemented test bed, built with an Aradial RADIUS server and a Cisco Linksys wireless router that supports the Distributed Coordination Function (DCF), and analyze the test approach for the iWLAN model.
THREE-PORT FULL-BRIDGE CONVERTERS WITH WIDE VOLTAGE RANGE INPUT FOR SOLAR POWER SYSTEMS
M.Ragavendran, Dr. M. Sasikumar
A systematic method for deriving three-port converters (TPCs) from the full-bridge converter (FBC) is proposed in this paper. The proposed method splits the two switching legs of the FBC into two switching cells with different sources and allows a dc bias current in the transformer. Using this systematic method, a novel full-bridge TPC (FB-TPC) is developed for renewable power system applications; it features a simple topology and control, a reduced number of devices, and single-stage power conversion between any two of the three ports. The proposed FB-TPC consists of two bidirectional ports and an isolated output port. The primary circuit of the converter functions as a buck-boost converter and provides a power flow path between the ports on the primary side. The FB-TPC can adapt to a wide source-voltage range, and tight control over two of the three ports can be achieved while the third port provides the power balance in the system. Furthermore, the energy stored in the leakage inductance of the transformer is utilized to achieve zero-voltage switching for all the primary-side switches. The FB-TPC is analyzed in detail, with operational principles, design considerations and a pulse-width modulation (PWM) scheme that aims to decrease the dc bias of the transformer. Experimental results verify the feasibility and effectiveness of the developed FB-TPC. The topology generation concept is further extended, and some novel TPCs, dual-input converters and multiport converters are presented.
A REFINEMENT IN EXPLOITING DATA GATHERING USING LOCALIZATION
Parthiban P
In Wireless Sensor Networks (WSNs), data transfer is achieved through intermediate nodes by hopping; by minimising the hop count, the energy utilisation and time latency of an individual node are minimised. The main idea behind this paper is accurate collection of data from a sensor node to the Base Station (BS) while reducing energy consumption, which proportionally increases speed. Data Gathering (DG) technology comprises the distribution of sensor nodes in an environment and a Mobile Sink (MS) that collects data from a Cluster Head (CH) among the nodes. Data from a distant sensor node reaches the CH via relay nodes and is then transmitted to the MS and on to the BS. The main issues with this technology are that the BS does not know which node the data was acquired from, the time latency in the aggregation of data, and the energy a node consumes relaying data. To overcome these issues, a networking concept called localisation is implemented, whereby the BS gathers position information about the sensor nodes with the help of beacon nodes; this improves the efficiency of data collection and reduces the time latency and energy consumption of a sensor node. In addition, a polling scheme is implemented to reduce the relaying of data by the sensor nodes. As a value-added scheme to minimize the error in data acquired from a sensor node by the CH and to eliminate buffering problems, cache optimization is implemented in the proposed work.
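The beacon-based localisation step can be illustrated with a standard range-linearisation sketch: three beacons at known positions and a measured distance to each yield a small linear system for the node's coordinates. The beacon coordinates and distances below are hypothetical; the paper's actual scheme may differ.

```python
def trilaterate(beacons, dists):
    """Estimate (x, y) from three beacon positions and measured distances
    by subtracting the first range equation from the other two, which
    cancels the quadratic terms and leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a11 * a22 - a12 * a21                      # Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]    # hypothetical beacon grid
dists = [5.0, 65 ** 0.5, 45 ** 0.5]                 # ranges to a node at (3, 4)
x, y = trilaterate(beacons, dists)
```

With noisy ranges, the same linearisation is usually solved by least squares over more than three beacons instead of exactly.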
Cryptography and encryption are terms that nowadays have an unseen impact on the merging fields of networking and security. In a globally connected world, where faster access to precise information is the most basic need, the security of confidential data during transfer from one place to another is a major concern. There are many sheltered approaches that can be applied within an organization's premises to keep data safe and sound. The prime goal driving the design of an encryption algorithm must be security against unauthorized attacks. Key-oriented algorithms are very efficient, but they are bulky to manage because key handling must be done; due to this great overhead, keyless algorithms seem an attractive option. Various techniques can be used to attain secure transfer of data, such as firewalls, proxy servers, steganography, and data security plans against worms, viruses and denial-of-service attacks.
An energy-efficient, reliable and timely data transmission is essential for wireless sensor networks (WSNs) employed in scenarios where plant information must be available for control applications. To reach a maximum efficiency, cross layer interaction is a major design paradigm to exploit the complex interaction among the layers of the protocol stack. This is challenging because latency, reliability, and energy are at odds, and resource constrained nodes support only simple algorithms. In this project, the novel protocol Breath is proposed for control applications. Breath is designed for WSNs where nodes attached to plants must transmit information via multi-hop routing to a sink. Breath ensures a desired packet delivery and delay probabilities while minimizing the energy consumption of the network. The protocol is based on randomized routing, medium access control, and duty-cycling jointly optimized for energy efficiency. The design approach relies on a constrained optimization problem, whereby the objective function is the energy consumption and the constraints are the packet reliability and delay. The challenging part is the modeling of the interactions among the layers by simple expressions of adequate accuracy, which are then used for the optimization by in-network processing. The optimal working point of the protocol is achieved by a simple algorithm, which adapts to traffic variations and channel conditions with negligible overhead. The protocol has been implemented and experimentally evaluated on a test-bed with off-the-shelf wireless sensor nodes, and it has been compared with a standard IEEE 802.15.4 solution. Analytical and experimental results show that Breath is tunable and meets reliability and delay requirements. Breath exhibits a good distribution of the working load, thus ensuring a long lifetime of the network. Therefore, Breath is a good candidate for efficient, reliable, and timely data gathering for control applications.
Information and Communication Technology (ICT) is the process of exchanging information using common protocols, and as technology develops, communication protocols also evolve. This paper considers how traditional library services are now moving to mobile library information services, what type of infrastructure libraries require to provide such services, and the pros and cons of using this technology in libraries. It also explores real-life examples of libraries that currently provide high-level services using mobile technology to satisfy users' information needs. The pattern of communication today is changing as new technologies emerge, changing the ways people communicate and organize information. The Indian educational industry is evolving: the shift from 'd-learning' (distance learning) to 'e-learning', and now from 'e-learning' to 'm-learning', will be the next big wave, and it will reform education in India. M-learning will bring about a paradigm shift from the traditional methods of education delivery and integrate ICT as an essential component of everyday learning. A coming trend is that it is possible to develop an m-learning presence with relatively little effort. Indian libraries need to remain vital to their users, and to this end they have to include mobile devices as part of their strategic plans.
COUNTERMEASURE AGAINST SELECTIVE JAMMING ATTACKS BY USING PACKET-HIDING METHODS
Sathishkumar.S, Mr. Vnslssr.Murthy M.E
The open nature of the wireless medium leaves it vulnerable to intentional interference attacks, typically referred to as jamming. This intentional interference with wireless transmissions can be used as a launchpad for mounting Denial-of-Service attacks on wireless networks. Typically, jamming has been addressed under an external threat model. However, adversaries with internal knowledge of protocol specifications and network secrets can launch low-effort jamming attacks that are difficult to detect and counter. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyze the security of our methods and evaluate their computational and communication overhead.
HIERARCHICAL WITH STRONG AUTHENTICATION REDUCTION SCHEME IN MANETS
D.M. D.Preethi
Many existing reputation systems sharply divide trust values into right or wrong, thus ignoring another core dimension of trust: uncertainty, even though uncertainty deeply impacts a node's anticipation of others' behavior and its decisions during interaction. Besides lacking precise semantics, this information abstracts away any notion of time. Such an approach is objective and robust, but it still leaves an opportunity for elaborate attackers to launch false-accusation attacks, since there is no constraint on update frequency, and it also lacks the ability to separate newcomers from misbehavers. Uncertainty originates from information asymmetry and opportunism; it reflects whether a trustor has collected enough information from past interactions with a trustee, and its confidence in that information. This paper therefore examines the concept of uncertainty and its role in trust evaluation, proposes a certainty-oriented reputation system, and presents various proactive and reactive mobility-assisted uncertainty reduction schemes.
A NOVEL APPROACH FOR MODELLING AND SIMULATION OF SENSORLESS BLDC MOTOR
C. Mohan Krishna, Mr.Ramesh Halakurki
Brushless Direct Current (BLDC) motors are one of the motor types most rapidly gaining popularity. BLDC motors are used in industries such as appliances, automotive, aerospace, consumer products, medical devices, industrial automation equipment and instrumentation. As the name implies, BLDC motors do not use brushes for commutation; instead, they are electronically commutated. BLDC motors have many advantages over brushed DC motors and induction motors. A few of these are: better speed-versus-torque characteristics, high dynamic response, high efficiency, long operating life, noiseless operation and higher speed ranges. Brushless DC motor simulation can be implemented simply, with the required control scheme, using specialized Simulink built-in tools and blocksets such as the SimPowerSystems toolbox, but this demands a powerful processor, large random-access memory and long simulation times. To overcome these drawbacks, this paper presents state-space modeling, simulation and control of a permanent-magnet brushless DC motor. By reading the instantaneous position of the rotor as an output, different variables of the motor can be controlled without the need for any external sensors or position-detection techniques. MATLAB/Simulink is utilized to give a very flexible and reliable simulation. With the state-space model representation, the motor performance can be analyzed for variations of the motor parameters.
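A state-space simulation of the kind described can be sketched with a simplified single-axis motor model and forward-Euler integration. All parameter values below (R, L, Ke, Kt, J, B and the 24 V input) are hypothetical illustrations, not values from the paper, and the model omits the per-phase commutation detail.

```python
# States x = [i, w] (winding current, rotor speed); input V (supply voltage).
# Hypothetical parameters: resistance R, inductance L, back-EMF constant Ke,
# torque constant Kt, inertia J, viscous friction B.
R, L, Ke, Kt, J, B = 1.0, 0.01, 0.05, 0.05, 0.001, 0.0001

def step(i, w, V, TL=0.0, dt=1e-4):
    """One forward-Euler step of the state equations
    di/dt = (V - R*i - Ke*w) / L,   dw/dt = (Kt*i - B*w - TL) / J."""
    di = (V - R * i - Ke * w) / L
    dw = (Kt * i - B * w - TL) / J
    return i + dt * di, w + dt * dw

i, w = 0.0, 0.0
for _ in range(20000):            # simulate 2 s at a 24 V step input
    i, w = step(i, w, V=24.0)
# near steady state the balances V = R*i + Ke*w and Kt*i = B*w hold,
# giving w close to Kt*V / (R*B + Kt*Ke) ≈ 461.5 rad/s for these numbers
```

The same loop structure extends directly to parameter sweeps, which is the analysis the abstract highlights for the state-space representation.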
DATA REDUCTION BY POINT SAMPLING AND FILTER REDUNDANCY METHOD FOR LASER SCANNING
Nitesh.S.Hirulkar, Rajendra.S.Tajan
This paper examines procedures for reducing point data acquired by a laser scanning device. From the evaluated procedures, applying point sampling and a filter-redundancy method is very useful for both freeform and primitive-shape point clouds. The tests show that applying filter redundancy to sampled point data can shrink the point cloud while maintaining good accuracy and the quality of the measured surface. The surface model generated by the proposed reduction procedures can be used in further modeling processes (e.g. registration, meshing and segmentation) in order to generate a fine 3D model and a perfect CAD model. In further research, the point reduction will be tested on 3D models generated by merging mesh polygons using the procedure in this study.
FACE RECOGNITION USING EIGEN FACES AND TRANSMISSION OF HIDDEN DATA USING WATERMARKING AUTHENTICATION
Bali Manohar
This paper presents an efficient face recognition approach using eigenfaces and a novel scheme for protecting the hidden transmission of face biometrics using an authentication watermarking technique. A face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper presents a methodology for face recognition based on an information-theoretic approach of coding and decoding the face image. The proposed methodology connects two stages: feature extraction using principal component analysis, and recognition using a feed-forward back-propagation neural network. The proposed scheme uses a watermark-embedding algorithm. Compared with personal identification number codes, biometrics serve in diverse applications, but their validity must be guaranteed. Watermarking provides a solution to ensure the validity of biometrics; the proposed scheme is composed of three parts: watermark embedding, data embedding and data extraction. One application of our proposed scheme is verifying the integrity of images transferred over the internet.
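The eigenface feature-extraction stage (principal component analysis on mean-centred face vectors) can be sketched with power iteration on tiny synthetic "images". The 2×2 face vectors and the single-component projection are illustrative only; real eigenface systems keep many components and much larger images.

```python
def top_eigenface(faces, iters=200):
    """Power iteration for the dominant eigenvector (top 'eigenface') of the
    covariance of mean-centred face vectors. Pure Python, tiny images only."""
    n, d = len(faces), len(faces[0])
    mean = [sum(f[j] for f in faces) / n for j in range(d)]
    X = [[f[j] - mean[j] for j in range(d)] for f in faces]
    v = [1.0] + [0.0] * (d - 1)                 # arbitrary starting direction
    for _ in range(iters):
        # w = (X^T X) v, computed cheaply as X^T (X v)
        Xv = [sum(row[j] * v[j] for j in range(d)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mean, v

def project(face, mean, v):
    """1-D eigenface coefficient used for nearest-neighbour recognition."""
    return sum((face[j] - mean[j]) * v[j] for j in range(len(v)))

# four hypothetical 2x2 'face images' flattened to vectors: two visual classes
faces = [[9, 1, 8, 2], [8, 2, 9, 1], [1, 9, 2, 8], [2, 8, 1, 9]]
mean, v = top_eigenface(faces)
features = [project(f, mean, v) for f in faces]
```

The projections cluster the first two faces away from the last two, which is exactly the low-dimensional feature space a classifier (here, the neural network) would operate on.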
PERFORMANCE AND ANALYSIS OF BLOCKING ARTIFACTS REDUCTION USING PARALLEL DEBLOCKING FILTER FOR H.264/AVC ON LOW BITRATE CODING
M.Anto benne, Dr.I.Jacob Raglend, Dr.C.Nagarajan, P.Prakash
The visual effect of blocking artifacts can be reduced by using a deblocking filter; when such artifacts occur, the image does not appear clean. Under parallelization, the deblocking filter has complicated data dependencies that yield insufficient parallelism, performance is easily degraded by synchronization as the number of cores increases, and a load-imbalance problem occurs. The proposed algorithm has three separate filtering modes: strong filtering, normal filtering and no filtering. A strong filter is used in the vertical (flat) direction, because the human visual system is sensitive there, while a smoothing filter is used in the horizontal direction. The deblocking process is divided into two parts: Boundary Strength Computation (BSC) and Edge Discrimination and Filtering (EDF). The BS value determines the strength of the filtering: a BS value of zero means no filtering, a BS value of 4 triggers strong filtering, and the other values use normal filtering. BSC uses METPMHT (Markov Empirical Transition Probability Matrix and Huffman Tree), and EDF uses IPCAP (Independent Pixel Connected Area Parallelization). These two methods are used to increase parallelization and reduce synchronization.
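The BS-to-filter mapping stated above (0 = no filtering, 4 = strong, otherwise normal) can be sketched as follows. The toy two-pixel filter and the threshold `alpha` are simplifications for illustration, not the actual H.264/AVC filter equations, which operate on up to three pixels per side with clipping tables.

```python
def filter_mode(bs: int) -> str:
    """Map an H.264-style boundary-strength value to a filtering decision:
    0 -> no filtering, 4 -> strong filtering, 1..3 -> normal filtering."""
    if bs == 0:
        return "none"
    if bs == 4:
        return "strong"
    return "normal"

def deblock_pair(p0, q0, bs, alpha=10):
    """Toy 1-D filtering of the two pixels adjacent to a block edge.
    A large step across the edge (>= alpha) is treated as a true image
    edge and left untouched, mirroring the edge-discrimination step."""
    if filter_mode(bs) == "none" or abs(p0 - q0) >= alpha:
        return p0, q0
    avg = (p0 + q0) // 2
    if filter_mode(bs) == "strong":
        return avg, avg                          # aggressive smoothing
    return (p0 + avg) // 2, (q0 + avg) // 2      # mild smoothing
```

Because the decision for each edge depends only on its own BS value and local pixels, edges with non-interacting pixel sets can be filtered in parallel, which is the property IPCAP exploits.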
CLASSIFICATION OF EXACT IDENTIFICATION OF CANCER USING EXPRESSIONS OF SUPPORT VECTOR MACHINES WITH FUZZY C-MEANS CLUSTERING
A. Nirmala, G. SudhaAnanthi
The microarray technique is used for gene-based classification and identification of cancerous tissue. Many machine learning and data mining techniques have been applied to detect cancer from gene expression, but those techniques were not sufficient: microarray data have a high degree of noise, and the drawback of the existing techniques is that they struggle with it. Gene-ranking methods address this problem, but commonly used gene-ranking techniques predict ranks wrongly when a large database is used. In order to overcome the drawbacks of the existing techniques, this paper proposes a score-based technique for ranking genes. The classifier used in the proposed technique is a Support Vector Machine (SVM) with the Fuzzy C-Means algorithm. The experiment is performed on a lymphoma dataset, and the results show that the classification accuracy is better than that of the older method.
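The Fuzzy C-Means step can be sketched with the standard membership and centre updates on one-dimensional scores. The gene scores, the two-cluster setup and the min/max initialisation below are hypothetical; the paper pairs this clustering with an SVM classifier, which is not reproduced here.

```python
def fuzzy_c_means(points, m=2.0, iters=50):
    """Two-cluster Fuzzy C-Means on 1-D values. Returns cluster centres and
    the membership matrix u[i][k] (fuzzifier m); centres are initialised
    deterministically at the data extremes."""
    centres = [min(points), max(points)]
    c = 2
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        for i, x in enumerate(points):
            d = [abs(x - ck) + 1e-9 for ck in centres]   # avoid divide-by-zero
            for k in range(c):
                # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2 / (m - 1)) for dj in d)
        for k in range(c):
            # centre = weighted mean with weights u_ik^m
            num = sum((u[i][k] ** m) * x for i, x in enumerate(points))
            den = sum(u[i][k] ** m for i in range(len(points)))
            centres[k] = num / den
    return centres, u

scores = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # hypothetical gene-ranking scores
centres, u = fuzzy_c_means(scores)
```

Unlike hard clustering, each gene keeps a graded membership in both clusters, which is useful when noisy microarray scores sit between the groups.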
In this paper a fuzzy inference system (FIS) is used to detect edges. FIS is a very simple and efficient method that identifies edges without determining a threshold value. The paper is concerned with adapting recognized fuzzy logic rules so that they are capable of detecting the edges of an image. The FIS gives a single output value corresponding to multiple input values, identifying whether a pixel is an edge pixel. A fuzzy inference system has been developed in the MATLAB environment that gives the output for input values used as membership functions.
A STUDY REPORT ON X-RAY SENSOR AMPLIFIER PCB FOR X-RAY BAGGAGE INSPECTION SYSTEM (XBIS)
P.Uday Kumar, G.R.C.Kaladhara Sarma
Security is one of the key aspects of public safety. Packages and luggage should be examined for weapons, narcotics and explosive devices by a security system to ensure safety at places like airports, parliament houses and banks. One such system is the X-ray Baggage Inspection System (XBIS). An X-ray generator supplies the required radiation during screening, and the emitted X-rays penetrate the object under examination while it passes through the unit. Screening of luggage is done slice by slice; the picture information thus obtained is stored, processed into an extraordinarily clear, informative, complete image, and finally reproduced on a monitor screen. The X-ray TV image is generated in accordance with the scanning principle, whereby an item of luggage is routed past a linear detector line by means of a conveyor belt. This detector line consists of X-ray sensor amplifiers mounted on PCBs, and this project deals with the design of these X-ray sensor amplifiers. On the detector side of the XBIS, a set of X-ray sensor amplifier PCBs is arranged in horizontal and vertical lines on the side opposite the X-ray generator. Two sets of 16 PCBs are arranged one above the other in two layers for lower- and higher-energy perception. The diodes in a line are scanned by incrementing the address while the object is screened line by line as it moves on the conveyor belt. The picture information thus obtained presents information about the objects and their inner parts, taking advantage of the penetration of X-rays through the objects.
Cloud computing allows the end users to use the application without installation and access their personal files at any computer with internet access. Apart from the advantages of cloud environment, security is the major issue. Due to the distributed nature, cloud environment is an easy target for intruders looking for the possible attacks to exploit. To address the security issues in the cloud environment an Intrusion Detection System (IDS) is proposed based on the features of the mobile agent. The mobile agents are used to collect and analyze the data collected from cloud environment to identify attacks exploited by the intruders. The main objective of the proposed system is to detect the known and unknown attacks exploited by the intruders in the cloud environment.
AN OPTIMIZED APPROACH FOR RECORD DEDUPLICATION USING MBAT ALGORITHM
Subi S, Thangam P
Record deduplication [1] is the task of identifying, in a data store, records that refer to the same real-world entity or object in spite of spelling mistakes, typing errors, different writing styles, or even different schema representations or data types. The existing system aims at providing an Unsupervised Duplicate Detection (UDD) method that can identify and remove duplicate records from different data stores: for a given query, UDD can effectively identify duplicates among the query result records of different web databases. After the same-source duplicates are removed, the supposed non-duplicate records from the same data store can be used as training examples, alleviating the burden of users having to manually label training examples. Starting from the non-duplicate record set, two different classifiers are applied: a Weighted Component Similarity Summing (WCSS) classifier is used to distinguish duplicate records from non-duplicates, followed by a genetic programming (GP) approach to record deduplication. The GP approach combines several pieces of attribute evidence, each with a similarity function extracted from the data content, to produce a deduplication function that can identify whether two or more entries in a repository are replicas. Since record deduplication is time-consuming even for small repositories, the aim is to foster a method that finds a proper combination of attributes and similarity functions, yielding a deduplication function that maximizes performance while using only a small representative portion of the corresponding data for training. However, the optimization of the result is limited, so the proposed system develops a new method, a modified bat algorithm, for record deduplication. The aim is to create a flexible and effective method that uses data mining algorithms; the system shares many similarities with generational computation techniques such as the genetic programming approach.
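The weighted component-similarity idea at the core of a WCSS-style classifier can be sketched as follows. The field names, weights and the choice of token-set Jaccard similarity are hypothetical illustrations; the paper's actual similarity functions and the bat-algorithm weight optimisation are not reproduced here.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two field values."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def duplicate_score(r1: dict, r2: dict, weights: dict) -> float:
    """Weighted sum of per-field similarities, normalised to [0, 1].
    Records scoring above a chosen threshold are flagged as duplicates."""
    total = sum(weights.values())
    return sum(w * jaccard(r1[f], r2[f]) for f, w in weights.items()) / total

# hypothetical schema and weights (these are what an optimiser would tune)
weights = {"name": 0.5, "address": 0.3, "phone": 0.2}
a = {"name": "John A Smith", "address": "12 High Street", "phone": "555 0101"}
b = {"name": "john smith", "address": "12 high street", "phone": "555 0101"}
c = {"name": "Mary Jones", "address": "9 Oak Lane", "phone": "555 9999"}
score_dup, score_non = duplicate_score(a, b, weights), duplicate_score(a, c, weights)
```

An evolutionary optimiser (GP in the baseline, the modified bat algorithm in the proposal) would search over the weights and similarity functions so that duplicate pairs like (a, b) score far above non-duplicates like (a, c).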
DESIGN A MODULE OF 256 × 16 NON-VOLATILE RAM FOR VECTORIZATION WITH CHIP AREA & WIRE LENGTH MINIMIZATION
Raj Kumar Mistri, B.S.Munda
This paper describes a design methodology for a 256 × 16 RAM using VHDL to ease description, verification, simulation and hardware realization. The 256 × 16 RAM has a 16-bit data length and can read and write 16-bit data. Vectorization involves parallel access to data elements in a random-access memory (RAM); however, a single memory module of conventional design can access no more than one word during each cycle of the memory clock. In this paper, a new memory organization is proposed in which words can be formed row-wise, column-wise or diagonally under the control of an external input. The behavioral and structural representations of this design have been defined.
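The row-wise/column-wise/diagonal word formation can be modelled behaviourally in software; the sketch below is a functional model of the organization described, not the VHDL itself, and the mode names are hypothetical labels for the external control input.

```python
class VectorRAM:
    """Behavioural model of a 256 x 16 RAM whose 16-bit read word can be
    assembled row-wise, column-wise or diagonally under a mode control."""

    def __init__(self, rows=256, width=16):
        self.rows, self.width = rows, width
        self.mem = [[0] * width for _ in range(rows)]   # bit matrix

    def write(self, row, word):
        """Store a 16-bit word into one row (bit b of word -> column b)."""
        self.mem[row] = [(word >> b) & 1 for b in range(self.width)]

    def read(self, index, mode="row"):
        if mode == "row":
            bits = self.mem[index]                       # conventional access
        elif mode == "column":
            # bit 'index' taken from each of the first 16 rows
            bits = [self.mem[r][index] for r in range(self.width)]
        else:
            # diagonal starting at row 'index': bit b comes from row index+b
            bits = [self.mem[(index + b) % self.rows][b] for b in range(self.width)]
        return sum(bit << b for b, bit in enumerate(bits))

ram = VectorRAM()
ram.write(0, 0xBEEF)
```

Because each bit of a column or diagonal word lives in a different row, a hardware realization can fetch all 16 bits in one memory cycle, which is the parallel-access property motivating the design.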
Automatic detection of disease patterns in medical images can assist radiologists in image analysis. Automated image analysis can be assisted by incorporating into a program information and knowledge that is available to radiologists. In this paper we discuss the detection of various skin diseases using region-of-interest extraction based on color.
The Unified Modeling Language (UML) is becoming widely used for software and database modeling, and has been accepted by the Object Management Group as a standard language for object-oriented analysis and design. In this paper we compare different UML tools and their pros and cons with a case study.
A knowledge-based expert system uses human knowledge to solve problems that would normally require human intelligence. Expert systems are designed to capture the intelligence and information found in the intellect of experts and provide this knowledge to other members of the organization for problem-solving purposes. With the growing importance of human resource management and the increasing size of organizations, maintenance of employee-related data and generating appropriate reports are crucial aspects of any organization. Therefore more and more organizations are adopting computer-based human resource management systems (HRMS). This paper explains the architecture and knowledge representation techniques of the knowledge-based expert system.
NEUROCOMPUTATIONAL MODEL USING LEABRA FRAMEWORK FOR INFORMATION STORAGE
Singh S.
The local, error-driven and associative, biologically realistic algorithm (LEABRA) is a widely used framework for designing neurocomputational models of cognitive processes. The complex structure of brain layers and interconnected neuronal units forms a pattern to store specific information. In an object, the information content is high at edges, corners and angles formed between two planes. Various research journals note that neuronal weight computation is based more on the high-information-content parts of the image than on regions with little colour variation. In this work we propose a neurocomputational model to store and retrieve the information of an object. After training, the model is tested on various similar objects and can recognise the object with some error. The model can also recognise objects that are similar in terms of number of sides and number of angles.
IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES
Ms. Dalvi Yogita Ashokrao
Conventional load balancing schemes are efficient at increasing the utilization of CPU, memory, and disk I/O resources in a distributed environment. However, most existing load-balancing schemes ignore network proximity and the heterogeneity of nodes. Load balancing becomes more challenging when load variation is very large and the load on each server may change dramatically over time; by the time a server makes a load-migration decision, the load status collected from other servers may no longer be valid. This affects the accuracy, and hence the performance, of load balancing algorithms. All the existing methods neglect the heterogeneity of nodes and contextual load balancing. In this seminar, context-based load balancing and task allocation with network proximity of heterogeneous nodes will be studied.
SNR ESTIMATION FOR HIGH LEVEL MODULATION SCHEME IN OFDM SYSTEM
B.Santhosh, R.Naraiah
For modern communication systems it is very important to estimate the signal-to-noise ratio (SNR) of the received signal and to transmit the signal effectively. The performance of existing non-data-aided (NDA) SNR estimation methods is substantially degraded for high-level modulation schemes such as M-ary amplitude and phase shift keying (APSK) or quadrature amplitude modulation (QAM). We propose an SNR estimation method which uses the zero-point auto-correlation of the received signal per block and the auto-/cross-correlation of the decision feedback signal in an orthogonal frequency division multiplexing (OFDM) system over a Rayleigh fading channel. The proposed method comes in two types: Type 1 estimates SNR from the zero-point auto-correlation of the decision feedback signal based on the second-moment property, while Type 2 uses both zero-point auto-correlation and cross-correlation based on the fourth-moment property. In block-by-block reception of an OFDM system, these two SNR estimation methods are practical to implement because they are correlation-based, and they show more stable estimation performance than previous SNR estimation methods.
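The second- and fourth-moment machinery that the two types rely on can be illustrated with the classical moment-based (M2M4) NDA estimator. The QPSK constellation, AWGN-only channel, and block length in the usage below are simplifying assumptions, not the paper's Rayleigh-fading OFDM setup:

```python
import numpy as np

def m2m4_snr_estimate(r):
    # Moment-based NDA SNR estimate for one complex baseband block.
    # Assumes a constant-modulus constellation and circular complex noise,
    # for which E|r|^4 = S^2 + 4*S*N + 2*N^2 and E|r|^2 = S + N.
    m2 = np.mean(np.abs(r) ** 2)            # second moment: S + N
    m4 = np.mean(np.abs(r) ** 4)            # fourth moment
    s = np.sqrt(max(2 * m2 ** 2 - m4, 0))   # recovered signal power S
    n = max(m2 - s, 1e-12)                  # recovered noise power N
    return 10 * np.log10(s / n)             # SNR in dB
```

On a long unit-power QPSK block with noise added at 10 dB SNR, the estimate lands within a fraction of a dB of the true value.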
SECURITY MODEL FOR COMPUTER NETWORK BASED ON CLUSTER COMPUTING
Satish Kumar Thalod, Ram Niwas
Network security is a complicated subject, historically tackled only by well-trained and experienced experts. Security on the Internet and on local area networks is now at the forefront of computer network related issues. With the increased number of LANs and personal computers, the Internet has created untold numbers of security risks. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. A cluster computing security model is a scheme for specifying and enforcing security policies. This paper proposes a security model for a computer network based on a cluster computing architecture, using various tools available in the TCP/IP security model.
In this paper some new even graceful graphs are investigated. We prove that the graph obtained by joining two copies of the even cycle Cn with the path Pk, and the splitting graph of K1,n, admit even graceful labelling.
INTEGRATED DAYLIGHT HARVESTING AND OCCUPANCY DETECTION USING DIGITAL IMAGING
Asst Prof Shilpa Gundagi, Mr. Manjunath Puttead
A case study of a catastrophic failure of a web marine crankshaft and a failure analysis under bending and torsion applied to crankshafts are presented. A microscopy (visual) observation showed that crack initiation started on the fillet of the crankpin by rotary bending and that propagation was a combination of cyclic bending and steady torsion. The crack front profile approximately adopts a semi-elliptical shape with some distortion due to torsion, and this study is supported by previous research work already published by the authors. The number of cycles from crack initiation to final failure of the crankshaft was obtained from records of the main engine operation on board, taking into account the beach marks left on the fatigue crack surface. The cycle counts calculated by linear fracture mechanics approaches showed that propagation was fast, which means that the level of bending stress was relatively high compared with the total cycles of an engine in service. Microstructural defects or inclusions were not observed, which suggests that the failure probably originated from an external cause and not from an intrinsic latent defect. Possible effects of added torsional vibrations, which induce stresses, are also discussed. Some causes are analyzed and reported here, but the origin of the fatigue fracture was not clearly determined.
CROSS PLATFORM APPLICATION DEVELOPMENT WITH COMPATIBLE GUI SOLUTION
Nithiyanantham.C, Kirubakaran.R
The evolution of smart phones and their applications plays one of the important roles in pervasive computing environments. But the diversity of mobile platforms and their APIs increases the effort of the software development approach for smartphone applications. Cross-platform mobile development tools provide codeless features, but they cannot solve the device fragmentation issues. The purpose of this paper is to construct a robust architecture for smart phone application development, which should provide an optimal GUI solution (coherency) with the assistance of responsive functions.
IMPLEMENTATION OF SCFT ALGORITHM TO DETECT AND SOLVE GREEDY NODES IN WIRELESS SENSOR NETWORKS
Pritima Chhillar, Ms. Smita
A Wireless Sensor Network (WSN) is a dynamic wireless network consisting of sensor nodes, in which each node communicates and has to rely on others to relay its data packets. Since sensor nodes are normally constrained by battery and computing resources, some nodes may choose not to cooperate, refusing to relay while still using the network to forward their own packets. Greedy nodes avoid being asked to forward data packets and hence conserve resources for their own use. The resources in a WSN, such as energy and bandwidth, are limited, which motivates nodes to reduce their energy consumption. Detecting these malicious nodes is a real challenge in WSNs. In this paper we propose a Self-Centered Friendship Tree (SCFT) algorithm to detect and remove greedy nodes from the network. We focus on the detection phase and try to reduce the packet loss caused by the existence of greedy nodes. Simulation results show that this algorithm is highly effective and can reliably detect and remove greedy nodes.
This research paper is about Bluejacking, the technology used for sending unsolicited messages over Bluetooth to Bluetooth-enabled devices such as cell phones or laptop computers. The range of Bluetooth is limited and hence the range of bluejacking is also limited: around 10 meters for mobile phones, and about 100 meters for laptops with powerful transmitters. This technology allows phone users to send business cards anonymously using Bluetooth wireless technology. The receiver does not know who has sent the message, but does see the name and model of the bluejacker's phone. People who receive such messages should not add the contacts to their address book. Devices that are set in non-discoverable, hidden or invisible mode are not susceptible to bluejacking. But note that, despite its name, bluejacking does not hijack the recipient's phone; the phone is not under the control of the bluejacker at any time. It has been noted that many people are still not aware of this technology. There are other, similar technologies such as Bluebug, Bluesnarfing and Blueprinting. This paper discusses Bluejacking and is written to make people aware of it, its use, its misconceptions and prevention measures, and also considers its advantages, disadvantages and future prospects.
OVERVIEW ON APPROACHES OF IMAGE SEGMENTATION WITH ITS ALGORITHMS AND APPLICATIONS
Vikram M.Kakade
Image processing is a part of signal processing. One of the typical operations performed in image processing is image segmentation. Segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture. Due to the importance of image segmentation a number of algorithms have been proposed, but the algorithm should be chosen based on the input image to get the best results. This paper gives a study of the various algorithms that are available for colour images, text and gray-scale images. It presents various image segmentation approaches such as pixel-based, edge-based and fixation-based segmentation, and among the various applications of image segmentation, finger-code generation using SPFB (singular point feature block) is explained in detail.
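A minimal example of the pixel-based category is global thresholding with Otsu's method, which picks the split that maximizes between-class variance of the histogram. The NumPy sketch below assumes an 8-bit grayscale image and is an illustration, not any specific algorithm from the survey:

```python
import numpy as np

def otsu_threshold(gray):
    # Pixel-based segmentation: choose the threshold that maximizes
    # the between-class variance of the grayscale histogram (Otsu).
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    mu_total = np.dot(np.arange(256), prob)   # global mean gray level
    best_t, best_var = 0, -1.0
    cum_p = cum_mu = 0.0
    for t in range(256):
        cum_p += prob[t]            # class-0 probability up to level t
        cum_mu += t * prob[t]       # class-0 first moment
        if cum_p <= 0.0 or cum_p >= 1.0:
            continue                # one class empty: split undefined
        var = (mu_total * cum_p - cum_mu) ** 2 / (cum_p * (1.0 - cum_p))
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image (half the pixels at level 30, half at 200) the returned threshold separates the two modes, and `gray > t` yields the binary segmentation mask.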
SIMULATION BASED STUDY OF TWO REACTIVE ROUTING PROTOCOLS
Kumari Nisha Saluja Ritu
Ad hoc networking allows mobile devices to establish communication paths without any central infrastructure. Because there is no central infrastructure and the mobile devices move randomly, various problems arise, such as routing and security; here the problem of routing is considered. Because of the highly dynamic and distributed nature of nodes, routing is one of the key issues in MANETs. There are three main classes of routing protocols: proactive, reactive and hybrid. In this paper an effort has been made to compare the performance of two reactive routing protocols: AODV (Ad-hoc On-demand Distance Vector) and DSR (Dynamic Source Routing). AODV and DSR are reactive protocols in which each node sends routing packets only when communication is needed. The two protocols have different mechanisms, which lead to significant performance differences. The protocols are analyzed using four performance metrics (packet delivery ratio, throughput, routing load and end-to-end delay) by varying the number of nodes, pause time and simulation time.
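The four metrics are straightforward to compute from simulation trace counts. The definitions below are one common convention (normalized routing load as routing packets per delivered data packet); the trace numbers in the usage are made up for illustration:

```python
def routing_metrics(sent, received, recv_bytes, duration, routing_pkts, delays):
    # Standard MANET evaluation metrics from trace-file counts.
    pdr = received / sent if sent else 0.0                       # packet delivery ratio
    throughput = recv_bytes * 8 / duration                       # bits per second
    nrl = routing_pkts / received if received else float('inf')  # normalized routing load
    avg_delay = sum(delays) / len(delays) if delays else 0.0     # mean end-to-end delay (s)
    return pdr, throughput, nrl, avg_delay
```

For instance, 950 of 1000 packets delivered (512 bytes each) over 100 s with 190 routing packets gives PDR 0.95 and routing load 0.2.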
INDUSTRIAL TEMPERATURE MONITORING AND CONTROL SYSTEM THROUGH ETHERNET LAN
S.Rajesh Kumar S.Rameshkumar
This paper presents a PC-based temperature monitoring and control system using virtual instrumentation (LabVIEW). Data acquisition plays an important role in industry in order to ensure quality of service. A temperature sensor measures the temperature and produces a corresponding analog signal, which is further processed by the microcontroller. The simulator acquires data from the microcontroller through the Ethernet port. The data are displayed on the microcontroller's LCD and on the PC monitor. Automation and control can be done with the help of the control circuitry.
IMPLEMENTATION OF REPUTATION INDEX PROTOCOL (RIP) FOR SECURE COMMUNICATION IN MANET
Kirti Patil Atish Mishra Praveen Bhanodia
In an ad hoc network, the transmission range of nodes is limited; hence nodes mutually cooperate with their neighboring nodes in order to extend the overall communication. However, among the participating nodes there may be some reluctant nodes, such as selfish nodes and malicious nodes, present in the environment. These types of nodes degrade the performance of the network. This paper gives a solution using a reputation-based mechanism and a credit-based mechanism. Moreover, it covers different strategies by which non-cooperative nodes are detected, isolated and/or prevented, along with their advantages and limitations. A global reputation-based scheme is also proposed in this paper for the detection and isolation of malicious nodes. A cluster head is used which is responsible for reputation management of each node in the environment. Detection of selfish nodes, which arise from nodes conserving their energy, is accomplished using NS2. After their detection, a performance analysis of the network with selfish nodes and of the network after isolation of the selfish nodes is carried out.
Improving the Performance of Grid Network using Energy Optimal Routing with Square Matrix in WSN
Namika, Sunita Bhardwaj
A wireless sensor network is a collection of small nodes with computation, sensing, and wireless communication capabilities. Modern networks are bi-directional, also enabling control of sensor activity. Energy awareness is an essential design issue in WSNs. Routing protocols in WSNs may differ depending on the application and network architecture. In a grid network, each node is connected to neighboring nodes along one or more dimensions. A grid network is known as a toroidal network when an n-dimensional grid is connected circularly in more than one dimension. In the proposed system, we improve the performance of a grid network using energy-optimal routing with a square matrix in a WSN. In our work, we use a vertical-and-horizontal approach: first move one step horizontally and then one step vertically.
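One way to read the horizontal/vertical approach is dimension-ordered (XY) routing on the grid, sketched below. The step order and coordinate convention are assumptions, since the abstract does not fix them; any such route has Manhattan-distance hop count:

```python
def xy_route(src, dst):
    # Dimension-ordered (XY) routing on a grid: move horizontally
    # until the column matches, then vertically until the row matches.
    x, y = src
    path = [(x, y)]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path
```

Routing from (0, 0) to (3, 2) gives 5 hops: three horizontal steps, then two vertical ones.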
IMPLEMENTATION OF AN ENERGY-EFFICIENT ALGORITHM TO IMPROVE THE ENERGY OF LOW ENERGY NODES IN WSN
Preeti Dahiya Ms. Sunita
A wireless sensor network consists of a number of sensors which are interlinked to perform the same function collectively or cooperatively for monitoring and balancing environmental factors. Due to their small size, sensors have a number of limitations. The energy-constrained sensor nodes in sensor networks operate on limited power resources, so it is very important to improve energy efficiency and reduce power consumption. Many routing protocols have been proposed to achieve this. Adaptive routing protocols are very attractive because their routing overhead is low. As a result, the routes tend to have the shortest hop count and contain weak links, which degrade performance and are susceptible to breaks. Here, an energy-efficient algorithm intended to provide a reliable transmission environment with low energy consumption is proposed. This algorithm efficiently utilizes the available energy and the received signal strength of the nodes to identify the best possible route to the destination. Simulation results are presented.
COMPARISON ANALYSIS OF VARIOUS TRANSITION MECHANISMS FROM IPV4 TO IPV6
Shivani Savita Monalisa
IPv4 and IPv6 are incompatible protocols, which means that a device with an IPv4 address cannot directly communicate with an IPv6 address. Many network devices are IPv4-only; they will not communicate in an IPv6-only environment, and only a few of them can or will upgrade to IPv6. The purpose of this study is to configure the network with address allocation, router configuration with the OSPF routing protocol, and implementation of dual-stack and of tunnels (namely manual tunnel, GRE-IPv4 tunnel, ISATAP and 6to4 tunnel), which allow communication between IPv4 and IPv6 network hosts. The entire configuration is implemented using the GNS3 simulator.
A NOVEL METHOD TO DETECT BONE CANCER USING IMAGE FUSION AND EDGE DETECTION
Nisthula P Mr. Yadhu.R.B.
Employing an efficient processing technique is considered an essential step to improve the overall visual representation of clinical images and, as a consequence, provide better diagnosis results. This paper employs an easy, fast and reliable technique to detect cancerous tissue in bone using different image processing techniques such as contrast enhancement, edge detection and image fusion. The experimental results show that the proposed method obtains a smooth image whose edges reveal the disease-affected part without spatial or spectral noise.
A ROBUST METHOD FOR CLASSIFICATION OF MYOCARDIAL INFARCTION SIGNALS FROM VIDEO IMAGES USING FAST ICA AND ANFIS
Aarcha Retnan M, Mr. Yadhu.R.B.
This paper presents a simple, low-cost method for measuring multiple physiological parameters using FastICA, together with an intelligent system that classifies myocardial infarction signals using an adaptive neuro-fuzzy inference system (ANFIS) model, all with a basic webcam. By applying the FastICA algorithm for independent component analysis to the color channels of video recordings, we extract the blood volume pulse from the facial regions. Heart rate (HR), respiratory rate, and HR variability are subsequently quantified. The developed method classifies the cardiac signal as normal or carrying an atrioventricular heart block (AVB).
A COMPARATIVE STUDY ON OPEN SOURCE CLOUD COMPUTING FRAMEWORK
Jisha S. Manjaly, Jisha S.
Cloud computing is a computing technique in which resources are shared rather than relying on personal systems or servers for applications. "Cloud" is used as a metaphor for web applications. Cloud computing is web-based computing for services such as databases, server applications, etc. A main goal of cloud computing is high-performance computing power, and virtualization is a technique to improve computing power. This paper aims at a comparative study of different frameworks used in cloud computing.
Cloud Computing is an emerging, next-generation technology. It provides different services such as SaaS, PaaS and IaaS. It is an Internet-based technology in which quality services, including data and software, are provided to users on remote servers. In order to achieve a secure cloud, techniques exist such as erasure-coded data, authentication and message-digest algorithms. A number of algorithms and methodologies are available for achieving data security. In this paper we look at current research related to data security issues such as confidentiality and authentication. In particular, we discuss how to secure data on certain servers.
SEGMENTATION OF ACL IN MR IMAGES
Raveendra Gudodagi Vidyasagar K. N., Aravind H. S.
The knee joint is the largest anatomical joint within the human body and facilitates movement from one place to another. Knees are among the most complex and delicate joints; they are frequently injured and damaged due to articulations, and are among the joints most commonly affected by osteoarthritis (OA). The pathophysiology of OA, a common debilitating disease afflicting over 71 million people globally, is poorly understood, and a treatment to slow, halt, or reverse the disease progression remains elusive. Among the ligaments responsible for maintaining the structural integrity of the knee joint, anterior cruciate ligament (ACL) injury is most commonly diagnosed. Recent advancement in clinical imaging technology has led to wide employment of magnetic resonance imaging (MRI) in such injury assessment. In this paper, a semiautomatic ACL segmentation program implemented in MATLAB is proposed. It takes advantage of the ACL's unique shape and orientation within MR images to carry out the segmentation. The goal of medical image segmentation is to partition a medical image into separate regions, usually anatomic structures, that are meaningful for a specific task in medical applications such as diagnosis, surgery planning, and radiation treatment.
As technology improves, system-on-chip designs become more and more complex. For the verification of these complex SoC designs, coverage metrics and the responses to them guide the entire verification flow. As designs move towards reusable and portable environments, verification components and verification environments should also support reusability. Hence there is a need for standalone, pre-verified, built-in verification infrastructure which can be easily plugged into the verification environment. The Verification Intellectual Property (Verification IP) is an integral and important component of such infrastructure, which aids designers and verification engineers in the task of validating the functionality of their design. The Verification IP developed in this paper provides the ability to verify a design against the requirements of the standard AMBA AXI4-Lite protocol. It includes functional coverage models to track verification progress. This Verification IP will help engineers quickly create a verification environment and test their AXI4-Lite master and slave devices.
FINGERPRINT MATCHING USING NEURAL NETWORK TRAINING
Kalpna Kashyap Meenakshi Yadav
Fingerprints are widely accepted for personal identification. In this paper the Levenberg-Marquardt back-propagation (LMBP) algorithm is used for training because LMBP is the fastest technique for complex data sets and gives better performance in such situations. The input image is trained by the trainlm function, producing different result sets such as the performance plot, regression values, the simulation network of the input image, a histogram graph, etc.
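The update that Levenberg-Marquardt training applies is w ← w − (JᵀJ + μI)⁻¹Jᵀe, where J is the Jacobian of the residuals e. It can be sketched for a toy linear model in NumPy; MATLAB's trainlm additionally adapts μ and operates on a multi-layer network, which this sketch deliberately omits:

```python
import numpy as np

def lm_step(w, x, y, mu):
    # One Levenberg-Marquardt update for the linear model y = w[0]*x + w[1].
    J = np.column_stack([x, np.ones_like(x)])  # Jacobian of residuals w.r.t. w
    e = (J @ w) - y                            # residual vector
    H = J.T @ J + mu * np.eye(2)               # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ e)

def train_lm(x, y, iters=50, mu=1e-3):
    # Fixed-damping LM loop (trainlm adapts mu; kept constant here).
    w = np.zeros(2)
    for _ in range(iters):
        w = lm_step(w, x, y, mu)
    return w
```

Fitting noiseless data y = 2x + 1 recovers the slope and intercept to high precision within a few iterations.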
IDENTIFICATION OF OUTLIERS BY COOK’S DISTANCE IN AGRICULTURE DATASETS
T.Jagadeeswari N.Harini
Data mining plays a vital role in the computing field. Huge and valuable knowledge is extracted from large collections of data. Outlier detection is currently an important and active research problem in many fields and is involved in numerous applications. This paper applies the minimum volume ellipsoid (MVE) with a principal component analysis (PCA) extension, a powerful algorithm for detecting multivariate outliers. If data points exceed the cut-off value, Cook's distance is used to identify the outliers. The paper also compares the performance of the suggested framework with statistical methods to demonstrate its validity through simulation and experimental applications for incident detection in the field of agriculture. Keywords: outlier detection, PCA, MVE, Cook's distance.
DESIGN AND IMPLEMENTATION OF A PIPELINED BCD MULTIPLIER USING KARATSUBA-OFMAN'S ALGORITHM
Nanditha H G, Swaminadhan Rajula
Decimal multiplication is an important and very frequent operation in financial and commercial applications. The dominant representation for decimal digits is the BCD encoding, and the BCD multiplier serves as the key block of a decimal multiplier. This paper presents the design and implementation of a BCD multiplier using Karatsuba-Ofman's algorithm. The results indicate that the proposed pipelined BCD multipliers implemented on a Virtex-6 FPGA device are efficient in terms of area and delay compared to other multipliers with the same number of digits.
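The recursive structure the hardware exploits, three half-size multiplications in place of four, looks like this in software form. The decimal split mirrors the BCD setting; the pipelining itself is not modeled:

```python
def karatsuba(x, y):
    # Karatsuba-Ofman multiplication: split each operand into high/low
    # halves and compute the product from three recursive multiplies
    # z2, z0 and z1 = (lo_x+hi_x)(lo_y+hi_y) - z0 - z2, not four.
    if x < 10 or y < 10:
        return x * y                      # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

The recursion reduces the digit-multiplication count from O(n²) to O(n^log2(3)) ≈ O(n^1.585), which is what makes the pipelined hardware version attractive in area and delay.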
REVIEW OF INSIDIOUS ATTACKS AND ITS PROTECTION MECHANISMS FOR WIRELESS SENSOR NETWORKS
Cosmena Mahapatra
Wireless Sensor Networks (WSNs) are a widely researched field which encompasses various revolutionary applications in areas like traffic management, wildlife preservation and, last but not least, the armed forces. The adoption of wireless sensor networks in various fields has also increased the security threats that have to be covered while using them. The objective of this paper is to explore these security-related issues and the mechanisms that protect against them.
Nowadays mobile applications are much more accessible due to high-speed mobile networks; third- and fourth-generation networks provide more than enough bandwidth for personal and professional use, with email and mobile app stores almost ubiquitous on today's devices. By 2013, over 80% of handsets sold in mature markets will be smartphones with a text keyboard, 3G and 4G connectivity and multimedia features. Mobile applications have become an important strategic imperative for many leading enterprises. This research develops a conceptual multi-phase framework of mobile transformations by integrating the emerging literature on mobile business and enterprise transformation with best-practice approaches employed in industry. While the potential value and impact of enterprise mobility is understood, little is known about the transformational capabilities of mobile applications. The framework provides a basis for future mobile-enterprise-oriented studies, enhances our understanding of mobile application opportunities, and facilitates the development of appropriate mobility strategies.
SECURE COMBINATORIAL APPROACH IN WSN USING PAIRWISE AND TRIPLE KEY DISTRIBUTION
Mrs.Prathima.G, Mrs. Bharathi.M.A
The pair-wise and triple key establishment problems in wireless sensor networks (WSNs) are addressed. Several types of combinatorial designs have already been applied to key establishment. A BIBD(v, b, r, k, λ) (or t−(v, b, r, k, λ) design) can be mapped to a sensor network, where v represents the size of the key pool, b represents the maximum number of nodes that the network can support, and k represents the size of the key chain. Any pair (or t-subset) of keys occurs together uniquely in exactly λ nodes; λ = 2 and λ = 3 are used to establish unique pair-wise or triple keys. Several known constructions of designs with λ = 2 are used to predistribute keys in sensors. A new construction of a design called a Strong Steiner Trade is described and used for pair-wise key establishment. To the best of our knowledge, this is the first paper on the application of trades to key distribution. Our scheme is highly resilient against node capture attacks (achieved by key refreshing) and is applicable to mobile sensor networks (as key distribution is independent of the connectivity graph), while preserving low storage, computation and communication requirements. A novel concept of triple key distribution, in which three nodes share common keys, is introduced, and its application to secure forwarding, detecting malicious nodes and key management in clustered sensor networks is discussed. A polynomial-based and a combinatorial approach (using trades) for triple key distribution are also presented, and the construction is extended to simultaneously provide a pair-wise and triple key distribution scheme, which is applied to secure data aggregation.
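The design-to-network mapping can be made concrete with the smallest Steiner system, the Fano plane, a 2-(7,3,1) design: 7 keys (v), 7 nodes (b), key chains of size 3 (k), and each key pair held by exactly λ = 1 node. The paper uses λ = 2 and λ = 3 designs; λ = 1 is chosen here only because its blocks are compact enough to list:

```python
from itertools import combinations

# Fano plane blocks: node i is preloaded with key chain FANO_BLOCKS[i].
FANO_BLOCKS = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def nodes_holding(blocks, key_subset):
    # All nodes whose key chain contains every key in key_subset.
    return [i for i, b in enumerate(blocks) if key_subset <= b]
```

Two properties make this usable for key establishment: every key pair lives in exactly λ = 1 node, and any two nodes' chains intersect in exactly one key, giving each node pair a shared pair-wise key.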
Development & Calibration of CAN for Designing Vehicle Electronic Architecture
Rahul B. Adsul Prof. Mrs. S.D. Joshi
This literature is in the field of communication networks where different Electronic Control Units (ECUs) communicate with each other over the Controller Area Network (CAN) protocol. Typically these CAN networks are used in automotive vehicles, plant automation, etc., and the proposed method is applicable in all such applications where a controller area network is used as the backbone electrical architecture. This work proposes a new method of packing CAN signals into CAN frames so that network bus-load is minimized, more CAN signals can be packed, and more ECUs can be accommodated within a CAN network. The proposed method also ensures that the age of each CAN signal is minimized and that all CAN signals reach the intended receiving ECUs within their maximum allowed age. Typically, network designers are forced to design and develop multiple sub-networks and network gateways to cope with network bus-load; as the proposed method minimizes bus-load, gateways introduced solely to reduce bus-load can be avoided. The implementation of CAN messages has been a critical aspect of the ECU development process in recent years. The traditional approach generates inconsistencies between the specification and the software implementation, possible coding errors, and variable reproducibility of the implementation depending on the ECU platform. Therefore, in order to achieve efficient development, the use of a high abstraction level of the CAN protocol is essential. This work also proposes a method for assigning CAN identifiers in each CAN frame by introducing different sub-fields within the CAN identifiers, which will help reduce the delay in CAN frames due to arbitration loss in the network.
The proposed methods will minimize the ECU load due to CAN frame reception for all ECUs in the CAN network, both by minimizing the number of CAN frames to be received by each ECU and by enabling hardware filtering of CAN frames by the receiving ECUs through the different sub-fields within the CAN identifier.
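A first-fit-decreasing heuristic gives the flavor of packing signals into frames to cut bus-load; it is only a sketch of the general idea, since the paper's actual packing also honors each signal's maximum allowed age, which this ignores:

```python
def pack_signals(signal_bits, payload_bits=64):
    # First-fit-decreasing packing of CAN signal bit-lengths into
    # frames with a fixed payload (classical CAN: 8 bytes = 64 bits).
    frames = []  # each frame: {'signals': [...], 'free': remaining bits}
    for bits in sorted(signal_bits, reverse=True):
        for frame in frames:
            if frame['free'] >= bits:       # signal fits in an open frame
                frame['signals'].append(bits)
                frame['free'] -= bits
                break
        else:                               # no frame fits: open a new one
            frames.append({'signals': [bits], 'free': payload_bits - bits})
    return frames
```

Packing signals of 32, 32, 16, 16, 16, 8 and 8 bits yields two full 64-bit frames instead of the seven frames naive one-signal-per-frame mapping would produce, directly reducing bus-load.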
Signal Processing in Neural Network using VLSI Implementation
S. R. Kshirsagar Prof. A. O. Vyas
Artificial intelligence is realized using many building blocks, such as mathematical equations and artificial neurons. In the proposed design, we mainly focus on the implementation of a chip layout design for a feed-forward Neural Network Architecture (NNA) in VLSI for analog signal processing applications. Analog components such as a Gilbert Cell Multiplier (GCM), adders and a neuron activation function (NAF) are used in the implementation. This neural architecture is trained using the back-propagation (BP) algorithm in the analog domain with new techniques for weight storage. We use 45nm CMOS technology for layout design and verification of the proposed neural network. The proposed neural network design will be verified for analog operations like signal amplification and frequency multiplication.
Optical communication is a form of telecommunication that uses light as the carrier and optical fiber as the transmission medium. An optical communication system meant for long-haul communication has been designed as an eight-channel Wavelength Division Multiplexing (WDM) system. Dispersion is the major effect present in the optical fiber; it causes a broadening of pulses and leads to Inter-Symbol Interference (ISI). The dispersion effect is compensated by using the Optical Phase Conjugation (OPC) technique, which utilizes Four-Wave Mixing (FWM) as the non-linear degrading effect. It has been found that the apposite modulation format for the transmitter part of the optical system is Modified Duobinary Return-to-Zero (MDRZ) when compared with Return-to-Zero (RZ), Non-Return-to-Zero (NRZ), Carrier-Suppressed Return-to-Zero (CSRZ) and duobinary modulation formats. Dispersion Compensation Fiber (DCF) is used in this case to compensate the dispersion present in the fiber. The performance of the designed system is scrutinized in terms of eye opening, spectrum, bit error rate, Q-value, etc.
A binary multiplier is an integral part of the arithmetic logic unit (ALU) subsystem found in many processors. Floating-point arithmetic is used extensively in banking, tax calculation, currency conversion, and other financial areas, as well as in broadcast, musical instruments, conferencing, and professional audio. Many of these applications must solve sparse linear systems that involve substantial matrix multiplication. The objective of this thesis is to design and implement a single-precision (32-bit) floating-point multiplication core. The multiplier conforms to the IEEE 754 standard for single precision. The IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754) is the most widely used standard for floating-point computation and is followed by many CPU and FPU implementations. The standard defines formats for representing floating-point numbers (including negative zero and denormal numbers) and special values (infinities and NaNs), together with a set of floating-point operations that act on these values. It also specifies four rounding modes and five exceptions. In this thesis, Verilog is used as the HDL and the design is synthesized with Xilinx ISE. Timing and correctness properties were verified. Instead of writing test benches and test cases, a waveform analyzer was used, which gives a better understanding of signals and variables and proved a good choice for simulating the design. A Verilog program is realized to perform the floating-point multiplication. The fixed-point design is extended to support floating-point multiplication by adding several components, including exponent generation, rounding, shifting, and exception handling.
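The core datapath of such a multiplier can be sketched at the bit level. The Python below is an illustrative model, not the thesis's Verilog: it handles normal, finite operands only (no denormals, infinities, NaNs, or exponent overflow) and implements the round-to-nearest-even mode.

```python
import struct

def f32_bits(x: float) -> int:
    """Reinterpret a value as its IEEE 754 single-precision bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def f32_mul(a: float, b: float) -> float:
    """Single-precision multiply for normal finite operands:
    XOR signs, add exponents (removing one bias), multiply the
    24-bit significands, normalize, round to nearest even."""
    ba, bb = f32_bits(a), f32_bits(b)
    sign = (ba >> 31) ^ (bb >> 31)
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF
    ma = (ba & 0x7FFFFF) | 0x800000          # restore hidden leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    exp = ea + eb - 127                      # add exponents, subtract one bias
    prod = ma * mb                           # up-to-48-bit significand product
    if prod & (1 << 47):                     # product in [2, 4): renormalize
        exp += 1
    else:                                    # product in [1, 2): align to bit 47
        prod <<= 1
    frac = prod >> 24                        # keep 24 significand bits
    rest = prod & 0xFFFFFF                   # discarded bits drive rounding
    if rest > 0x800000 or (rest == 0x800000 and frac & 1):
        frac += 1                            # round to nearest, ties to even
        if frac == 1 << 24:                  # rounding carried out of significand
            frac >>= 1
            exp += 1
    bits = (sign << 31) | (exp << 23) | (frac & 0x7FFFFF)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def f32(x: float) -> float:                  # round a double to float32
    return struct.unpack('<f', struct.pack('<f', x))[0]

a, b = f32(0.1), f32(0.2)
assert f32_mul(a, b) == f32(a * b)           # matches hardware-style rounding
assert f32_mul(3.5, 1.25) == 4.375           # exact case, no rounding needed
```

The exception handling, denormal support, and the remaining rounding modes that the thesis adds on top of this datapath are omitted here for brevity.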
An Integrated Framework for Efficient Implementation of Enterprise Systems
Okonkwo Obikwelu R., Mgbeafulike Ike J.
Today's enterprise systems usually form a system of sub-applications, each responsible for a particular piece of functionality, so the design and maintenance of such a complex system is not a simple task. In addition, user requirements can change, and the affected parts must be identified and evolved; similarly, new components, or even whole systems, may need to be integrated. To build and sustain competitive advantage, companies have to rely more and more on their IT systems in virtually every aspect of their business operations. Despite their critical operational, tactical, and strategic role, many new and old IT systems have either not delivered what they were created for or have failed outright. Many of these failures and inadequacies result from a poorly executed development process: the processes used employ either inadequate development models or flawed implementations, due in part to the lack of proper frameworks and effective collaborative mechanisms between the development and integration functions. This places high requirements on information systems and, in turn, poses a great challenge to software developers. To build and sustain such systems, an integrated framework is needed for building and implementing business applications that automate business processes and support the business functions of the enterprise. The method adopted for the design is the iterative design method. As the end objective, an integrated architecture framework was developed as a tool for the efficient implementation of enterprise systems. Comprehensive system frameworks are necessary to capture the entire complexity of such systems.
The architecture framework provides the conceptual foundation, comprising business, information, information-system, infrastructure, and governance-and-security views, necessary for building and managing the integral business system and all its components. It also provides an integrated description of enterprise information systems in terms of the instantiation of the architecture framework.
Cloud computing is a new way of delivering computing resources and services, moving away from personal computers and individual enterprise application servers to services provided by a cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years, and the industry now needs cloud computing services to bring the best opportunities to the real world. Cloud computing is still in its initial stages, with many issues yet to be addressed. This paper presents readings on cloud computing and addresses related research topics, the challenges ahead, and possible applications.
A New f-IDPS Approach for Intrusion Detection in High-Speed Networks
Anshuman Saurabh, Deepti Sharad Nirwal
As networks become faster and faster, the emerging requirement is to improve the performance of Intrusion Detection and Prevention Systems (IDPS) to keep up with the increased network throughput. In high-speed networks, it is very difficult for an IDPS to process every packet. Since IDPS throughput has not improved as quickly as that of switches and routers, it is necessary to develop new detection techniques beyond the traditional ones. In this paper we propose a rule-based IDPS technique that detects Layer 2-4 attacks by examining flow data alone, without inspecting packet payloads. Our