Computer as an Essential Tool for Teaching and Learning in the Modern World: A Case Study of Selected Secondary Schools within the Maiduguri Metropolitan Council, Borno State
Adati E. Chahari
The use of information and communication technology (ICT) in everyday life is particularly pronounced in teaching and learning situations. However, using computers in teaching and learning is not without serious challenges. In spite of the need to use ICT in teaching and learning in the modern world, the situation in Borno State is highly challenging. This study is designed to assess the computer as an essential tool for teaching and learning in selected secondary schools within the metropolis, namely Shehu Garbai Government Secondary School, Namu Model Secondary School, Ruby Springfield College Maiduguri and Lamisula Government Secondary School Maiduguri in Borno State, from which the sample was selected for the research. The researcher adopted a survey research method with the use of a questionnaire. The findings revealed that the majority of the respondents, 114 (57%), are female, most of them in JSS 3, while 46 respondents (23%) are male; the respondents are aware of and knowledgeable about the computer because e-examination was introduced into their curriculum in 2013/2014. Knowledge and awareness of the computer in the modern world were found to be factual, and the tools most used in learning are Wikipedia and Google. Some recommendations were made: the government should provide all schools with an alternative source of power to make all services available to students and to enable quick and easy use of ICT facilities; the government should organize seminars or programs to improve teachers' knowledge and skills in ICT and other software; and schools should encourage the high-level learning skills needed to act, respond, learn and adjust to ever-changing circumstances, since as the world grows increasingly complex, success and prosperity will be linked to people's ability to think, act, adapt and communicate creatively.
MapReduce is a well-known programming model and implementation method for executing, processing and generating massive data sets. The MapReduce algorithm consists of a map function that processes a key/value pair to produce a set of intermediate key/value pairs, and a reduce function that merges all values associated with the same intermediate key.
MapReduce parallelizes execution by itself, without requiring the programmer to implement any explicit parallel programming model, and it is an efficient way to process unstructured data.
In this research the MapReduce algorithm is implemented on a cluster of machines using the Hadoop Distributed File System (HDFS) in order to perform a pattern matching algorithm over different volumes of datasets. A quantitative performance analysis of the MapReduce algorithm is carried out for the different data volumes on the basis of execution time and the number of patterns searched.
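To make the map/reduce contract concrete, the following minimal single-process Python sketch counts occurrences of one search pattern, standing in for the Hadoop pattern-matching job described above (the function names and the in-memory shuffle are illustrative; a real Hadoop job distributes these phases across the cluster):

```python
from collections import defaultdict

def map_fn(_, line, pattern):
    # map: emit one intermediate (pattern, 1) pair per occurrence in the line
    return [(pattern, 1)] * line.count(pattern)

def reduce_fn(key, values):
    # reduce: merge all counts sharing the same intermediate key
    return key, sum(values)

def mapreduce(lines, pattern):
    intermediate = defaultdict(list)
    for offset, line in enumerate(lines):      # Hadoop would shard the input
        for k, v in map_fn(offset, line, pattern):
            intermediate[k].append(v)          # shuffle: group values by key
    return dict(reduce_fn(k, vs) for k, vs in intermediate.items())

print(mapreduce(["ab ab", "ba ab"], "ab"))     # {'ab': 3}
```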
Relational databases have traditionally been used to store application data, but there is now a need to store and manage huge amounts of data that relational databases cannot handle. NoSQL technology overcomes this problem. This research paper provides a brief introduction to how NoSQL databases work and a comparative study between MongoDB and CouchDB, which are widely used for big data applications. Operations are performed on both databases to explore and distinguish their results. The paper shows the performance of MongoDB and CouchDB; the results indicate that CouchDB is more powerful than MongoDB for loading and processing big data, and processes it faster in these experiments. The paper describes the functionality of MongoDB and CouchDB over large datasets.
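As an illustration of the interface difference between the two systems compared above, the hedged sketch below inserts and reads back one document with each. It assumes local default-port servers and the pymongo and couchdb Python packages; the database and field names are invented for the example:

```python
from pymongo import MongoClient
import couchdb

record = {"sensor": "s1", "reading": 42.0}

# MongoDB: the driver speaks a binary wire protocol; documents are stored as BSON.
mongo = MongoClient("mongodb://localhost:27017")
mongo_db = mongo["bench"]
mongo_db.readings.insert_one(dict(record))
print(mongo_db.readings.find_one({"sensor": "s1"}))

# CouchDB: everything goes over HTTP/JSON; every update creates a new revision.
couch = couchdb.Server("http://localhost:5984/")
couch_db = couch.create("bench")          # assumes the database does not exist yet
doc_id, doc_rev = couch_db.save(dict(record))
print(couch_db[doc_id])
```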
An Enhanced General Self-Organized Tree-Based Energy-Balance Routing Protocol (EGSTEB) for Wireless Sensor Network
Sreela E. K., Asha G.
A Wireless Sensor Network (WSN) consists of a large number of sensor nodes. These sensor nodes collect large amounts of data and transmit them to the base station, but the nodes have limited battery power, and battery replacement is not easy in a wireless sensor network. Energy-efficient routing protocols provide a longer network lifetime by minimizing total energy consumption and balancing the network load. The General Self-Organized Tree-Based Energy-Balance Routing protocol (GSTEB) is one such energy-efficient routing protocol; it considers two extreme cases of data fusion, namely when each sensor node transmits the same amount of data and when nodes transmit different amounts of data. For both cases it builds a routing tree using a process in which, for each round, the Base Station (BS) assigns a root node and broadcasts this selection to all sensor nodes, and each node selects its parent by considering its own and its neighbors' information. GSTEB provides efficient energy-balanced routing and a longer lifetime for the network whether each sensor node transmits the same or a different amount of data. Although GSTEB achieves this, it has some problems, such as the difficulty of distributing the load evenly over all nodes in the tree structure and the overhead imposed on the base station. The proposed method (EGSTEB) solves these problems by reducing the overhead and the energy consumption of the sensor nodes.
Study of Grid and Cluster based Network in Wireless Sensor Network
Honey Soni, Himanshu Soni
The grid approach in wireless sensor networks is a key technique for extending the lifetime of a network by reducing energy consumption. It also increases the network's capacity, reduces routing overhead and makes the network more scalable when the number of nodes is large. Clustering in wireless ad-hoc networks is likewise a key technique for reducing energy consumption, which increases network lifetime and capacity, and the scalability of the network improves as the number of nodes grows. In this paper we study grid-based and cluster-based networks and discuss which one is more efficient at reducing energy consumption.
Designing a vehicle tracking system that is effective, accurate and reliable is an emerging need in many areas. Such systems commonly use GPS technology to determine the location of a vehicle, and can be used to track a vehicle or a fleet of vehicles and obtain information about their current locations. The high operating cost of vehicle tracking systems, including hardware and software requirements, usually prevents these systems from being widely adopted. In this paper a vehicle tracking system based on GPS and GPRS is proposed. The location of the vehicle is retrieved using an embedded GPS sensor. A modified coding method is used to encode and compress the location data before it is sent, to offer cost-effective usage of network traffic, and the privacy of the transmitted data is guaranteed using a simple security mechanism. The encoded and encrypted location data is then sent to the tracking server using GPRS technology, and an authorized user can track a vehicle through a secure web interface.
Cloud computing has witnessed a drastic evolution in the last few years with respect to its development and application; research and development in cloud computing are scaling in a positive direction due to the growing demand, but little attention is paid to its impact on mental health. This paper provides results, after thorough data collection and analysis, on how cloud-based services can act like a slow poison on the mental health of young people. The young people surveyed expressed concern about the security of, access to, and privacy of their data in the cloud. The results show that female users report becoming more short-tempered since they began using cloud-based websites than male users. Users have no idea where their data is stored in cloud technology, so they constantly worry about the privacy and security of their data; when using cloud-based services they experience mental pressure due to the unknown location of data storage, and regular users of cloud-based services report feeling anxiety.
A Novel Approach For Medical Image Fusion Based On NSCT Theory
Yaduru Shilpa, PG Student; Dr. P. Abdul Khayum, Professor
In medical applications, multimodal image fusion is an important tool. It has developed with the advent of various imaging modalities in medical imaging and is mostly used for medical diagnosis. In this proposal a novel fusion technique for multimodal medical images is proposed, based on the non-subsampled contourlet transform (NSCT). In this algorithm, the high-frequency and low-frequency components are fused using fusion rules; two different fusion rules, based on phase congruency and an averaging method, are proposed. Finally, the fused image is reconstructed by applying the inverse NSCT to all coefficients. Simulation results show that the proposed framework provides better and more effective analysis for multimodal fusion. Simulation is done using MATLAB.
Organizations are becoming more reliant than ever on data to run their business. But as the amount of data grows, policies and approaches for ensuring the safety and confidentiality of that information are falling behind. Companies need a more comprehensive approach to data privacy and protection, one that closes the gaps between business strategy, risk management, compliance reporting and IT security. A company's approach to data protection and privacy should be more than legally compliant; it should be a core part of both the organization's business value proposition and its culture. It should also be global in scope. All employees must understand this "culture of caring" and that they are accountable for safeguarding information. And as organizations innovate around new business models and technology to gain or maintain competitive edge, they must be equally aggressive in innovating around the data security issues that these advancements introduce. Therefore, the researchers have tried to draw attention to how data breaches can be prevented in our immediate environment and to assist agencies and organizations in responding effectively to data breaches. [1]
Identification of User Search Behavior through Task Trail Clustering
Hima G., Jasila E.K.
A web log is a rich store of valuable information that records users' search queries and related actions on the internet. By mining the recorded information, it is possible to discover users' underlying goals, interests and search behaviors. In order to mine information from web logs, the logs should be segmented into sessions or tasks by clustering the queries. In this work, the Task Trail is introduced to understand user search behaviors. A task can be defined as a set of semantically relevant queries issued to satisfy an atomic user information need. A task trail represents all user activities within a particular task, such as query reformulations and URL clicks. In most previous works, web search logs have been studied mainly at the session or query level, yet users may submit several queries within one task and handle several tasks within one session. Although previous studies have addressed the problem of task identification, little is known about the advantage of using tasks over sessions or queries for search applications. Instead of analyzing session trails or query trails, task trails can be analysed to determine user search behaviour much more efficiently. By separating different task trails from a session, the approach can be used in several search applications such as determining user satisfaction, predicting user search interests, and suggesting related queries.
In the digital world, digital media has a great impact, and protection of such digital content is becoming necessary nowadays. Various image file formats are available for digital media. Digital content is often encrypted for security purposes; encryption changes the content into an unreadable format. The digital content can also be watermarked for tamper detection or copyright protection. Encryption is often used, but once content is decrypted it is no longer protected, so watermarking provides another level of security for digital content: a watermark remains even after decryption to identify the owner of the content. This paper provides a survey of various ways to provide copyright protection for media content.
An Enhancement on EASR Method for Sink Relocation in Wireless Sensor Network
Bahjasra A. , Asha G.
In a wireless sensor network (WSN), how to conserve the limited power resources of sensors to extend the network lifetime as long as possible, while performing the sensing and sensed-data reporting tasks, is the most critical issue in the network design. In a WSN, sensor nodes deliver sensed data back to the sink via multi-hopping. The sensor nodes near the sink will generally consume more battery power than others; consequently, these nodes will quickly drain their battery energy and shorten the network lifetime of the WSN. Sink relocation is an efficient network-lifetime extension method, which avoids consuming too much battery energy in a specific group of sensor nodes. Energy Aware Sink Relocation (EASR) is a sink relocation mechanism for mobile sinks in WSNs. The mechanism uses information related to the residual battery energy of sensor nodes to adaptively adjust the transmission range of sensor nodes and the relocating scheme for the sink. The EASR scheme mainly determines when the sink should be triggered to perform the relocation process and where it should move. Routing is based on the remaining energy of the sensor nodes on the path; to achieve this type of routing, the Maximum Capacity Path (MCP) algorithm is used. The sink relocation mechanism consists of two parts: the first determines whether to trigger sink relocation by checking whether a relocation condition is met; the second determines which direction the sink should head in and the relocation distance. By adding clustering to the topology of the EASR scheme, the transmission delay can be reduced, and the neighbouring nodes of the sink need not always be busy, so the network lifetime can also be increased.
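A maximum-capacity (widest) path of the kind EASR relies on can be computed with a max-heap variant of Dijkstra's algorithm. The sketch below is a minimal illustration, not the paper's implementation; the graph structure and energy units are assumptions:

```python
import heapq

def maximum_capacity_path(graph, energy, source, sink):
    """Widest-path variant of Dijkstra: find the route whose weakest node
    (minimum residual battery energy) is as strong as possible.
    `graph` maps node -> iterable of neighbours; `energy` maps node ->
    residual battery energy (assumed units, e.g. joules)."""
    best = {source: energy[source]}           # best bottleneck found so far
    parent = {source: None}
    heap = [(-energy[source], source)]        # max-heap via negated keys
    while heap:
        bottleneck, node = heapq.heappop(heap)
        bottleneck = -bottleneck
        if node == sink:                      # reconstruct the chosen route
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path)), bottleneck
        for nxt in graph[node]:
            cand = min(bottleneck, energy[nxt])
            if cand > best.get(nxt, float("-inf")):
                best[nxt] = cand
                parent[nxt] = node
                heapq.heappush(heap, (-cand, nxt))
    return None, 0.0                          # sink unreachable
```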
Transmission of data through the internet has become very common nowadays, so it is important to have secure communication over the internet. Cryptography and steganography are two important methods for providing secure communication. Cryptography converts the message into an unintelligible form, while steganography hides the message in some other media file, which can be text, image, audio, video, etc. In this paper we explain the concepts of cryptography and steganography and compare them. The paper also focuses on how the combination of cryptography and steganography enhances security.
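A toy sketch of the combination the paper advocates: encrypt first, then hide the ciphertext in a cover medium. The XOR stream cipher and the flat pixel list stand in for a real cipher and image library and are illustrative only:

```python
def xor_encrypt(message: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR each byte with a repeating key (cryptography step).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))

def lsb_embed(pixels: list, payload: bytes) -> list:
    # Hide each payload bit in the least significant bit of one pixel value.
    bits = [(byte >> k) & 1 for byte in payload for k in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    stego = pixels[:]
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def lsb_extract(pixels: list, n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bit << (7 - k) for k, bit in enumerate(bits[i*8:(i+1)*8]))
                 for i in range(n_bytes))

# Usage: encrypt first (cryptography), then hide (steganography).
secret = xor_encrypt(b"meet at noon", b"key")
cover = list(range(256)) * 2             # stand-in for grayscale pixel values
stego = lsb_embed(cover, secret)
assert xor_encrypt(lsb_extract(stego, len(secret)), b"key") == b"meet at noon"
```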
Traction is a physical process in which a tangential force is transmitted across an interface between two bodies through dry friction or an intervening fluid film, resulting in motion, stoppage or the transmission of power. Traction systems are classified into electric and non-electric. Electric traction satisfies most of the requirements of an ideal traction system that are not met by non-electric traction. Electric traction uses both AC and DC supply systems. Generally, the traction motors used are DC series motors.
Indian Railways is the world's second-largest railway network. Safe transportation of passengers is the key business objective of any transportation system. Railways are recognized as the safest mode of mass transportation, and safety has been recognized as the key issue for the railways and one of their special attributes. The Anti-Collision Device (ACD) is 'self-acting', microprocessor-based data communication equipment patented by KRCL that works round-the-clock. Different variants of ACDs, when installed on locomotives (along with their auto-braking units), guard vans/SLRs, stations and level crossings (both manned and unmanned), form an 'ACD Network'.
A 16-Core Processor with Shared-Memory and Message-Passing Communications
Shaik Mahmed Basha, G. Nageswararao (PhD)
A 16-core processor with both message-passing and shared-memory inter-core communication mechanisms is implemented in 65 nm CMOS. Message-passing communication is enabled by a mesh packet-switched network-on-chip, and shared-memory communication is supported using the shared memory within each cluster. The processor occupies 9.1 mm² and is fully functional at a clock rate of 750 MHz at 1.2 V, with a maximum of 800 MHz at 1.3 V. Each core dissipates 34 mW under typical conditions at 750 MHz and 1.2 V while executing embedded applications such as an LDPC decoder, a 3780-point FFT module, an H.264 decoder and an LTE channel estimator. Index Terms: chip multiprocessor, cluster-based, FFT, H.264 decoder, inter-core communication, inter-core synchronization, LDPC decoder, LTE channel estimator, message-passing, multi-core.
Efficient Incremental Clustering of Documents based on Correlation
A.Devender, B.Srinivas, A.Ashok
In this project, three dynamic document clustering algorithms are proposed: Term frequency based MAximum Resemblance Document Clustering (TMARDC), Correlated Concept based MAximum Resemblance Document Clustering (CCMARDC) and Correlated Concept based Fast Incremental Clustering Algorithm (CCFICA). Of the three proposed algorithms, TMARDC is based on term frequency, whereas CCMARDC and CCFICA are based on a concept extraction algorithm over correlated terms (terms and their related terms).
High Gain Multi Level Dual Load SEPIC Converter for Incremental Conductance MPPT
B. Narasimha Rao, Mrs. Alamuru Vani
Studies of photovoltaic systems are increasing extensively because solar energy is a large, secure, essentially inexhaustible and broadly available resource for future energy supply. However, the output power of photovoltaic modules is influenced by the intensity of solar radiation, the temperature of the solar cells and so on. Therefore, to maximize the efficiency of the renewable energy system, it is necessary to track the maximum power point of the input source. This paper presents the modeling and simulation of incremental conductance (IncCond) Maximum Power Point Tracking (MPPT) using a multi-level Single Ended Primary Inductance Converter (SEPIC) and compares its performance with a self-lift SEPIC converter. The solar panel model is developed using the basic circuit equations of the photovoltaic cell. The MPP of a solar panel varies with irradiation and temperature, and the IncCond algorithm is used to track the maximum power from the panel. The unregulated voltage from the panel is regulated by the multi-level SEPIC, which can at the same time supply medium-voltage and high-voltage loads. The efficiency of the multi-level SEPIC converter over the self-lift SEPIC converter has been tested using Matlab/Simulink.
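The incremental-conductance rule compares the incremental conductance dI/dV with the instantaneous conductance -I/V; the two are equal exactly at the maximum power point. A minimal sketch of one tracking iteration follows (the duty-cycle step size and sign convention are assumptions of this illustration; the actual direction depends on the converter topology):

```python
def inc_cond_step(v, i, v_prev, i_prev, d, step=0.005):
    """One iteration of the incremental-conductance MPPT rule.
    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.
    Returns an updated converter duty cycle `d`."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return d                    # no change: hold the duty cycle
        return d + step if di > 0 else d - step
    g_inc, g = di / dv, -i / v          # incremental vs. instantaneous conductance
    if g_inc == g:
        return d                        # dP/dV = 0: already at the MPP
    return d - step if g_inc > g else d + step
```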
A Survey on Detection and Prevention of Black Hole & Gray Hole Attack in MANET
Monika, Swati Gupta
A mobile ad-hoc network (MANET) is an infrastructure-less, dynamic network consisting of a collection of wireless mobile nodes that communicate with each other without the use of a centralized network. Security in MANETs is the most important concern for the basic functionality of the network. A malicious node falsely advertises the shortest path to the destination node during the route discovery process by forging the sequence number and hop count of the routing message. In this paper, we discuss various techniques for the detection and avoidance of black hole and gray hole nodes in MANETs.
Assessment Of Computer Managed Instruction For Imo State University, Nigeria
Ekwonwune Emmanuel Nwabueze, Ibebuogu Chinwe, Sam Ekeke Doris, Ayiam Chukwuma
Computer Managed Instruction (CMI) is computer software whose primary purpose is teaching or self-learning. This paper was motivated by the fact that Imo State University is under pressure to deliver an efficient and effective teaching process, but staffing constraints combined with increasing numbers of students have brought about unfavourable situations in most departments, such that problem-solving lectures have become an uphill task. This paper therefore investigates whether the use of Computer Managed Instruction (CMI) has an impact on learners. The methodology involved collecting data through questionnaires, which were statistically analyzed using the weighted mean average. The results showed, among other things, that those who use Computer Managed Instruction learn faster and better than those who rely on conventional face-to-face classroom learning, and that Computer Managed Instruction is an effective aid in the teaching/learning process.
The power generated from solar panels cannot be directly used in many applications. One key reason is that the sun is not stationary: it keeps moving from east to west. Solar panels receive peak sunlight only for the short period of the day when the sun directly faces them; during the rest of the day, they get only partial sunlight. For this purpose, a "sun tracker with position display" is one optimal approach, allowing the solar panel to track the sun's position and ensuring maximum power generation. Such systems are based on a solar collector designed to collect the sun's energy and convert it into either electrical power or thermal energy. The literature contains many studies on the use of solar collectors in applications such as light fixtures and window-covering systems.
Design and Implementation of Vedic Algorithm using Reversible Logic Gates
Hemangi P.Patil, S.D. Sawant
Multiplication is one of the important operations in most digital signal processing (DSP) applications. Sometimes the performance of a DSP application is dominated by the speed at which a multiplication operation can be executed. The main goals in designing a multiplier are to reduce its delay and power dissipation; hence an improved Vedic multiplier is designed to increase the efficiency of the system, and implementing this Vedic multiplier with reversible logic further reduces power dissipation. The Vedic multiplier is designed using one of the Vedic sutras, "Nikhilam Navatascaram Dasatah", which means "all from nine and the last from ten". This method is implemented using reversible logic gates to reduce the power dissipation and the number of logic gates. The synthesis and simulation of the reversible Nikhilam algorithm are carried out using Xilinx ISE 13.2; the implementation and detailed design analysis results are given in the paper.
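The Nikhilam sutra replaces a large multiplication by operations on the small complements of the operands from a nearby base, which is what makes the hardware multiplier cheap. A worked sketch follows (pure Python, illustrating the arithmetic identity only, not the reversible-gate implementation):

```python
def nikhilam_multiply(a: int, b: int, base: int = 100) -> int:
    """Nikhilam sutra ('all from nine and the last from ten') for two
    numbers near a power-of-ten base: work with the deficiencies from
    the base instead of the full operands."""
    da, db = base - a, base - b    # complements ("all from 9, last from 10")
    left = a - db                  # cross-subtraction gives the leading part
    right = da * db                # product of the small complements
    return left * base + right    # e.g. 97*96 -> (97-4)=93 | 3*4=12 -> 9312

assert nikhilam_multiply(97, 96) == 97 * 96
assert nikhilam_multiply(998, 987, base=1000) == 998 * 987
```

The identity (a - db)·base + da·db = a·b holds exactly, so the carry from the right-hand part is absorbed automatically by the addition.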
Reduction Of Switching Activity In NoC By Using Data Encoding Schemes
Y.Supraja, R.S.Pratap Singh
As technology shrinks, the power dissipated by the links of a network-on-chip (NoC) starts to compete with the power dissipated by the other elements of the communication subsystem, namely the routers and the network interfaces (NIs). In this paper we present a reduction of switching activity in a network-on-chip by using data encoding schemes. The proposed schemes require no change to the link architecture and reduce the number of transitions (switching activity comprises both self-switching transitions and coupling transitions). Since these transitions account for much of the power consumed by the network-on-chip and its routers, data encoding schemes let us save energy and reduce power dissipation without any degradation in performance.
Transformation-based Optimization Framework (ToF) for Workflows and its Security Issues in Cloud Computing
S.Neelakandan, S.Muthukumaran
We present ToF, a general transformation-based optimization framework for workflows in the cloud. Specifically, ToF formulates six basic workflow transformation operations; an arbitrary performance and cost optimization process can be represented as a transformation plan (i.e., a sequence of basic transformation operations), and all transformations together form a huge optimization space. We further develop a cost-model-guided planner to efficiently find the optimized transformation for a predefined goal, and our experimental results demonstrate the effectiveness of ToF in optimizing performance and cost in comparison with existing approaches. Cloud computing offers its customers an economical and convenient pay-as-you-go service model, also known as usage-based pricing: cloud customers pay only for their actual use of computing resources, storage and bandwidth, according to their changing needs, utilizing the cloud's scalable and elastic computational capabilities. In particular, data transfer costs (i.e., bandwidth) are an important issue when trying to minimize cost.
LTE-Advanced communication using in Femtocells Perspective
Bhupendra Kumar, Dr. Ganesh Prasad, Madhuresh Kumar
LTE stands for Long Term Evolution, and LTE-Advanced improves on LTE to provide better mobile coverage. Femtocells are small mobile telecommunications base stations that can be installed in residential or business environments for better coverage of mobile or wireless devices. The information and communications technology ecosystem now represents around 10% of the world's electricity generation, and this share is increasing continuously due to the growing number of base stations in 4G and LTE-Advanced wireless networks, which may create a power management problem in the near future. According to the Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update for 2012-2017, overall mobile data traffic is expected to grow to 11.2 exabytes (EB) per month by 2017. Femtocells are a possible solution to this challenge, resulting in significantly improved signal quality and substantial cost savings. The aim of this paper is to examine, in a top-down approach, femtocells as an important component of the developing LTE-Advanced technology, with an essential projection into the future of femto-cellular technology, what the future holds for its deployment by operators, and the benefits of femtocells.
The desire to send a message as safely and as securely as possible has been a point of discussion since time immemorial. Information is the wealth of any organization, which makes security a top priority for any organization dealing with confidential data. Whatever method we choose for security, the burning concern is the degree of security. Steganography is the art of hiding a secret message in a cover object without leaving a noticeable trace on the original message. It is used to increase the security of messages sent over the internet. In contrast to cryptography, it does not scramble the data but conceals it in digital media. This paper reviews various data hiding techniques and briefly explains steganography.
Optimized Multimodal Medical Image Fusion Approach Based On Phase Congruency And Directive Contrast In NSCT Domain
Kolusu Varalakshmi, N.Raj Kumar
Although tremendous progress has been made in medical image processing in the past decade for the evaluation of clinical information from medical images, a number of problems still exist. Adverse cases have been recorded in clinical analysis where the physician fails to assess the patient's condition from a single medical source image. In this paper a novel medical image fusion method based on the NSCT domain is presented, which proves to be more efficient than conventional approaches. In conventional algorithms, little attention has been paid to obtaining detailed low-frequency and high-frequency coefficients, which would further help in a reliable fusion process. In the proposed method, phase congruency and directive contrast are used to yield a reliable analysis of the low-frequency and high-frequency coefficients. Finally, the fused image is reconstructed from the acquired composite coefficients. The experimental results clearly show the performance gain of the proposed method over conventional approaches; the multimodal fusion approach has been successfully applied to Alzheimer's disease, subacute stroke and recurrent tumor cases, demonstrating the clinical ability of the proposed method in terms of good accuracy and better performance.
PAPR Reduction In OFDM System By Using Clipping And Filtering And Weighted Method
S. Murali Krishna, R. Sudheer Babu
In this paper, a peak-to-average power ratio (PAPR) reduction scheme based on a weighted orthogonal frequency-division multiplexing (OFDM) signal is proposed to reduce the PAPR without distortion while removing the weighting burden at the receiver side. In the proposed scheme, a weight is imposed on each discrete OFDM sample via a certain kind of band-limited signal, and an OFDM signal formed from the weighted discrete data is then applied to the high power amplifier (HPA), while the original signal can be fully recovered at the receiver side. Meanwhile, the time needed to transmit the weighted OFDM signal is the same as that for the original OFDM signal. The performance of the proposed scheme is evaluated with a MATLAB simulator. Consistent with the theoretical analysis, the PAPR of the weighted OFDM signal is smaller than that of the clipping and filtering (C&F) method, and the bit-error-rate (BER) performance of the weighted OFDM system is improved compared with the C&F method. Moreover, the proposed method is simpler than the C&F method.
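For reference, the C&F baseline against which the proposed weighted method is compared can be sketched as below: compute the PAPR, clip the oversampled envelope, then zero the out-of-band bins. The oversampling factor, clipping ratio and the single iteration are assumptions of this illustration:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_and_filter(symbols, oversample=4, clip_ratio=1.4):
    """One pass of the classic C&F baseline: zero-padded IFFT (oversampling),
    envelope clipping, then zeroing of the out-of-band FFT bins."""
    n, half = symbols.size, symbols.size // 2
    spec = np.zeros(n * oversample, dtype=complex)
    spec[:half], spec[-half:] = symbols[:half], symbols[half:]  # centred mapping
    x = np.fft.ifft(spec)
    a = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.maximum(np.abs(x), 1e-12)
    x = np.where(np.abs(x) > a, a * x / mag, x)                 # clip the envelope
    spec = np.fft.fft(x)
    spec[half:-half] = 0                                        # out-of-band filter
    return np.fft.ifft(spec)

rng = np.random.default_rng(1)
sym = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
x0 = np.fft.ifft(np.concatenate([sym[:32], np.zeros(192), sym[32:]]))
print(round(papr_db(x0), 2), "dB before,",
      round(papr_db(clip_and_filter(sym)), 2), "dB after one C&F pass")
```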
Prevention Of Cross Site Scripting Attack Using Filters By String Based Approach
S. Karthika Devi, Dr.E. Ramaraj
Among web application vulnerabilities, the cross-site scripting attack is the most common. It is a kind of attack in which the intruder is able to change the code executed by the application by hooking extraneous data onto legitimate data. It is a challenging issue to sanitize every user query form through which malicious code could be injected. In this paper a method is proposed by which cross-site scripting attacks on web applications are considerably reduced. The proposed method provides a single solution to the various kinds of attacks created by attackers. The main objective is to prevent the attack by incorporating a data dictionary into the client-side scripting rather than as a separate arrangement. Our approach is examined on a real web application and the results are evaluated. The experimental results show that the method requires neither lengthy rule generation nor a separate data dictionary, and it reduces time complexity without random generation of input values. The implementation shows that the proposed method works well for real cross-site scripting attacks.
An Algorithmic approach to improve Network Performance for 1G-EPON
Dr. Vibhakar Shrimali, Dr. Sibaram Khara, Mr. Abhishek Gaur
This paper proposes a novel 'TD-sense' algorithm to improve network performance and compares it with the standard DBA_GATED algorithm. The algorithm is implemented by emulating an access network with two 1G transceivers for 1G EPON. The algorithm is tested with Triple Play services, which include simulated voice, video and data packets, and is found to maintain a better tradeoff between throughput and delay than the existing DBA_GATED algorithm.
SQL Injection Detection Based On Replacing The SQL Query Parameter Values
R.Latha, Dr.E. Ramaraj
Information is converted to a digitized format and then flows through the network medium. Security mechanisms are mostly used to protect information from unauthorized intruders on the network; secure communication across the medium as well as between the communicating entities is essential. Among the many types of attacks, SQL injection is considered in the proposed work. SQL injection is one of the major attacks, leaking valuable information to intruders: SQL injection attacks target databases that are accessible through the front end of a website and exploit flaws in the input validation logic of its components. Therefore, a strong method is needed to overcome this problem. This paper proposes an efficient method for detecting SQL injection by manipulating the input attributes of the SQL query and measuring the distance between query strings, together with proper replacement of the affected queries; it performs query analysis for both static and dynamic manipulation of user queries.
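One minimal way to realize "replacing the SQL query parameter values", in the spirit of the paper: substitute every literal value with a placeholder and compare the resulting query skeleton with the developer's template. The regex and the exact structural comparison are simplifications assumed for the sketch:

```python
import re

LITERAL = re.compile(r"'(?:[^']|'')*'|\b\d+\b")   # string and numeric literals

def skeleton(query: str) -> str:
    """Replace every literal parameter value with a fixed placeholder,
    so only the structure of the query remains."""
    return LITERAL.sub("?", query)

def is_injected(template_query: str, runtime_query: str) -> bool:
    """Flag the runtime query when replacing its parameter values still
    leaves a different structure than the developer's template --
    e.g. an attacker-appended OR '1'='1' survives the substitution."""
    return skeleton(template_query) != skeleton(runtime_query)

template = "SELECT * FROM users WHERE name = 'x' AND pin = 0"
benign = "SELECT * FROM users WHERE name = 'alice' AND pin = 1234"
attack = "SELECT * FROM users WHERE name = '' OR '1'='1' AND pin = 1234"
print(is_injected(template, benign))   # False
print(is_injected(template, attack))   # True
```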
Data Center Management: Viewing Racks in the Network through Dot net platform
Ahad Abdullah
Data Center Infrastructure Management (DCIM) captures detailed real-time information about energy use across our data centers and operational facilities, enabling us to measure, trend, alert and take action. DCIM also helps us visualize the data center environment in 3D and manages the power, space and cooling capacity of the environment to better meet business objectives. The solution provides a baseline against which one can measure cost savings and improvements in operational reliability and performance, and it delivers continuous information for ongoing improvement. It enables data collection from multiple protocols and systems, which reduces manual effort and provides visibility into power and space utilization, allowing for real-time decision-making in a smart way. The proposed work utilizes dot net technologies to perform the DCIM tasks. We have successfully deployed an application for the task, and the report further signifies the importance of DCIM. The racks are visible in the network, along with where and how they are connected, which helps to monitor and regulate their capacities, improving the cloud computing services of DCIM. We propose to develop state-of-the-art technology and to initiate further research and development work in cloud-associated services.
A new multiple-input multiple-output (MIMO) transmission scheme, called space-time block coded spatial modulation (STBC-SM), is proposed. The scheme combines space-time block codes (STBC) and spatial modulation (SM); by combining them, we avoid the drawbacks of both systems while exploiting their advantages. The transmitted information symbols are expanded not only in the space and time domains but also in the spatial (antenna) domain, which corresponds to the on/off status of the transmit antennas available in the space domain.
A general technique is presented for the design of the STBC-SM scheme for any number of transmit antennas. Besides the high spectral efficiency advantage provided by the antenna domain, the proposed scheme is also optimized by deriving its diversity and coding gains to exploit the diversity advantage of STBC. A low-complexity maximum likelihood (ML) decoder is given for the new scheme which profits from the orthogonality of the core STBC. The performance advantages of the STBC-SM over simple SM and over V-BLAST are shown by simulation results for various spectral efficiencies and are supported by the derivation of a closed form expression for the union bound on the bit error probability.
Data Encoding And Decoding Techniques For NOC Applications
Swetha M, Mr. Sunil G
In this paper we present data encoding and decoding schemes used to reduce the power dissipation of the communication system in a network-on-chip (NoC). In a NoC, the main source of power dissipation is the links, and link power dissipation is caused by self-switching and coupling-switching activity. We present a set of data encoding and decoding schemes that operate at flit level and on an end-to-end basis, which allows us to minimize both self-switching and coupling-switching activity. Self-switching is reduced by checking the switching transitions, and the coupling technique is integrated with the wormhole-routed network: flits are encoded by the network interface before they enter the network and decoded on arrival, with a focus on keeping the hardware overhead low.
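As one concrete member of this family of schemes, the sketch below implements bus-invert style encoding at flit granularity: if transmitting a flit as-is would toggle more than half the link wires, its complement is sent together with an invert flag. The flit width and single-flag format are assumptions of the illustration:

```python
def hamming(a: int, b: int, width: int) -> int:
    # Number of bit positions that toggle between two consecutive flits.
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def encode_flits(flits, width=32):
    """Bus-invert encoding: send the complement (plus an 'invert' flag bit)
    whenever the plain flit would toggle more than half the wires."""
    prev, out, mask = 0, [], (1 << width) - 1
    for flit in flits:
        if hamming(prev, flit, width) > width // 2:
            flit = ~flit & mask              # inverted payload on the link
            out.append((flit, 1))            # flag=1: decoder must re-invert
        else:
            out.append((flit, 0))
        prev = flit                          # link now carries this value
    return out

def decode_flits(coded, width=32):
    mask = (1 << width) - 1
    return [(~f & mask) if inv else f for f, inv in coded]

data = [0x00000000, 0xFFFFFFFF, 0xFFFF0000]
assert decode_flits(encode_flits(data)) == data
```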
Inter frame Video Duplication Forgery Detection: A Review
Harmanpreet Kaur, Manpreet Kaur
Identifying inter-frame forgery and tampering in a video is a challenging topic in video forensics. Tampering involves falsification of video content in order to cause damage or make unauthorised modifications/alterations, and the impacts of such tampering need to be studied. Studies to date have employed video authentication methods that detect and localize both frame duplication and region duplication as forms of video forgery, and that identify factors impacting video forgery. In this review paper, the proposed techniques based on video duplication forgery are studied to test whether a video is genuine or not.
The aim of software testing is to evaluate an attribute or capability of a system and determine whether it meets required expectations. The most intellectually challenging part of testing is the design of test cases. Nowadays, UML is extensively used for object-oriented modeling and design, because the UML metamodel supports describing both structural and behavioural aspects of an architecture. However, it is still difficult to understand this behaviour, because automatically generated model diagrams tend to be huge. To overcome this software visualization problem, model-based slicing techniques have been developed. Model-based slicing is a decomposition technique to extract and identify relevant model parts or related elements across diverse model views. We propose a novel procedure to extract a sub-model from large model diagrams on the basis of a slicing criterion. The proposed methodology uses model-based slicing to slice a sequence diagram and extract the desired chunk. The presented approach uses UML, conversion of UML into XML, and the Java DOM API for parsing and slicing; the extracted sequence diagram is then generated using the editor.
The World Wide Web contains a huge amount of data, including numerous websites that are examined by a tool or program known as a crawler. Owing to the richness of the information contributed by many internet users every day, internet forum sites have become valuable deposits of material on the web; as a result, mining knowledge from forum sites has become increasingly important and significant. The main objective of this paper is to focus on web forum crawling techniques. The various techniques of web forum crawlers and the challenges of crawling are discussed, and an outline of web crawling and web forums is given.
Nashik is a pilgrimage place and hosts the Kumbh Mela every 12 years. Being a place crowded with people, garbage disposal is a major problem: heaps of garbage are found in the city, which may infect people and produce a foul smell. Despite efforts to create awareness among the citizens, waste is still not being disposed of in the bins provided by the Nashik Municipal Corporation. This paper presents an interactive solution in which people eagerly participate in cleaning the city while gaining some general knowledge.
Profit Maximization Of SaaS By Reusing The Available VM Space In Cloud Computing
Richa, Hari Singh
Cloud computing has recently emerged as one of the buzzwords of the IT industry. The SaaS layer is the upper layer of the cloud computing model: it offers consistent access to software applications over the Internet without direct investment in infrastructure and software, as a service delivered to the client by applications running on cloud computing infrastructure hosted by service providers. The main work of this layer is to deliver software application services over the internet, in real time, on demand, in a pay-per-use manner. The main concern of this paper is how to minimize infrastructure cost and maximize the SaaS provider's profit; two algorithms are proposed to this end. The proposed algorithms work on mean values and efficiently reuse the available VM space to handle client requests, instead of creating a new VM to deliver the services. As a result, the SaaS provider avoids the extra penalty cost and profits from serving client requests with client satisfaction and without violation of the SLA.
A Fuzzy Improved ACO Approach for Test Path Optimization
Ritu, Ajay Dureja
To improve software reliability it is necessary to test each aspect of the software system. But testing also increases the cost and time of a software release; because of this, only the effective test cases should be executed, under some controlled mechanism. In this paper, a fuzzy improved ACO approach is defined to generate the optimized test path. To assign weightage to the test cases, module interaction analysis and fault analysis are used together in this work. The experimentation is performed on some real-time programs, and the results show that the proposed approach reduces the overall test cost of the software system.
Building Secure, Concurrent, Distributed DBaaS with Confidential Data over Threshold Access Structure
A.Blessy, Varun Chand
Guaranteeing confidentiality within the Database-as-a-Service (DBaaS) paradigm in the cloud remains a problem. Here, we propose ConfidentialConcurrentSecureDBaaS, with some modifications, to provide availability, security, accessibility and reliability without exposing unencrypted data to the cloud provider. It additionally permits multiple, independent and geographically distributed clients to execute concurrent operations on encrypted data, eliminating the need for an intermediate server between the cloud consumer and the cloud provider, and it preserves data confidentiality and consistency at the client and cloud level. To achieve this, the system integrates existing cryptographic schemes, isolation mechanisms and novel strategies for the management of encrypted metadata on the untrusted cloud database.
Aperiodic Task Scheduling Algorithms for Multiprocessor Systems in a Real-Time Environment
Nirmala H, Dr. Girijamma
Multiprocessor systems contain multiple processors, either homogeneous or heterogeneous; scheduling tasks for such systems is very critical, and hence a scheduling protocol should be followed to achieve optimality.
Scheduling algorithms give the scheduler a set of protocols to manage real-time systems. In this paper we present an overview of aperiodic task scheduling algorithms and servers for real-time multiprocessor systems, and a method is proposed for scheduling aperiodic tasks with communication delays using a Genetic Algorithm (GA).
Improved Degraded Document Image Binarization Using DBMF And Edge Pixel Stroke
Tarnjot Kaur Gill, Mr.Aman Arora
The purpose of document binarization is to decrease the quantity of information present in the document and retain only the relevant data, which permits document analysis that depends on the binarization algorithm. From the survey, it has been found that none of the existing techniques gives proficient results in document binarization. To address this, a new algorithm for recovering degraded documents is proposed. The proposed technique consists of three parts. The first part applies the decision-based switching median filter to smooth and restore a degraded document. The second part applies adaptive image contrast enhancement to the text and background variation caused by different types of document degradation, followed by the actual binarization. The algorithm has been designed and implemented in MATLAB using the Image Processing Toolbox, and comparative analysis has shown the improvement of the proposed algorithm on the basis of various performance metrics.
Weighted guided image filtering (WGIF) is a smoothing technique with the property that it can preserve sharp edges: edges are given higher weightage than other areas. It is an extension of guided filtering. Images are fused using the weighted guided filter: for fusion, each image is decomposed into a base layer and a detail layer. Using this filter, the images are fused to obtain an output of higher visual quality.
Power saving is an important factor in mobile ad hoc networks, because most of the equipment in MANETs is battery powered, so we have to reduce the power consumption of each and every node. There are several methods to reduce the power consumption of nodes; an effective one is sending packets with optimum power. In this paper, we propose the energy-efficient power-aware routing (EPAR) protocol, a new power-aware routing protocol. The objective of the proposed protocol is to increase the service lifetime of a MANET with dynamic topology. Basically, EPAR is an improvement of the DSR (Dynamic Source Routing) protocol: in EPAR the path is chosen based on hop count and mobility in addition to energy, unlike DSR, where the path is chosen based only on the minimum number of hops. The proposed approach is a dynamic, distributed load-balancing approach that avoids power-congested nodes and chooses lightly loaded paths. This helps the protocol achieve minimum variance in the energy levels of different nodes in the network and maximizes the network lifetime, which improves the throughput of the network; the main motivation is to reduce the power consumption of each and every node.
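The route-selection idea, choosing paths on energy, hop count and mobility together, can be illustrated by a composite metric like the one below; the weights and normalisation are invented for the sketch and are not EPAR's exact formula:

```python
def route_score(path, energy, mobility, w_energy=0.5, w_hops=0.3, w_mob=0.2):
    """Illustrative composite route metric in the spirit of EPAR: prefer
    routes whose weakest node has plenty of residual energy, with few
    hops and slow-moving relays. Weights are assumptions for the sketch."""
    bottleneck = min(energy[n] for n in path)      # weakest node's battery
    avg_mob = sum(mobility[n] for n in path) / len(path)
    return (w_energy * bottleneck
            - w_hops * (len(path) - 1)             # penalise hop count
            - w_mob * avg_mob)                     # penalise fast-moving relays

def best_route(routes, energy, mobility):
    # Pick the candidate path with the highest composite score.
    return max(routes, key=lambda p: route_score(p, energy, mobility))
```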
The analysis of microscopic and macroscopic objects in the real world requires the processing of multi-parameter data, where the value of each parameter varies from point to point in three-dimensional space. Hence the compression of such images poses multiple challenges. In this paper an attempt is made to provide a framework for compressing the image data of such objects.
Medical Image Fusion Using Redundant Wavelet Based ICA Co-Variance Analysis
Rajveer Kaur, Er. Gurpreet Kaur
This paper discusses research work on medical image fusion using wavelet transforms of images in the frequency domain. We have built a model framework that permits experimentation with different wavelets, which exhibit a reduced blend ratio, and with control techniques for fusion, using a set of basic operations of independent component analysis (ICA) on wavelet-decomposed bands. The fusion system is based on the redundant discrete wavelet transform (RDWT), which offers good information-handling capacity and enhancement properties. The technique enhances a standard wavelet merger for blending the lower-frequency components of multi-image data, using deviation rules with weighted average absolute values. The system was tested numerically against the previous PCA-based approach on the basis of entropy, the fusion factor for the fused and original images, PSNR index values, the MSE metric and the standard deviation (STD) of the fused image. The experimental results show the superiority of the proposed system over the past approach.
A Novel Fuzzy Based Single Switch Converter For High Speed Srm Drive Applications
G. Siva Kumar, B. Raja Sekhar
The fuzzy logic controller is designed with the voltage control action as feedback, significantly improving the dynamic performance of the converter. The switched reluctance motor (SRM) drive is receiving increasing attention from researchers as a viable candidate for adjustable-speed and servo applications. Switched reluctance motors have a simple structure and inherent mechanical strength, with no rotor winding. A new converter for the SRM, which uses only one switch per phase and has the important characteristic of fast phase-current commutation, is used to improve dynamic performance, speed range and efficiency. Recently, the demand for high-speed drives has increased due to their mechanical advantages: high-speed drive systems are widely used in industrial applications such as blowers, compressors, pumps and spindles due to their compact size and high efficiency. This paper presents a new fuzzy controller design for a switched reluctance motor drive system. Simulation results based on Matlab/Simulink are discussed in detail.
Comparison of Improved Routing Protocol (IPL_AOMDV) and Existing Routing Protocol (PL_AOMDV) For Wireless Ad-hoc Network
Gurpreet Kaur, Naveen Kumari
The multipath routing protocol is the most important aspect of an ad-hoc network, since the performance of the network mainly depends on the routing protocol. This paper addresses the problems of an existing multipath routing protocol (PL_AOMDV), presents a new routing protocol named IPL_AOMDV, and compares the two in terms of load balance, congestion control and power control parameters, finding that IPL_AOMDV outperforms the existing routing protocol in every aspect.
Improving the Security of Workflow-based System using Multiple XML Digital Signature
Mayuri Anil Jain, Prof. Uday Joshi
Private companies and government organizations all around the world have recently made huge investments in the automation of their processes and in the management of electronic documentation. The main requirement in the management of digital documentation is its equivalence, from a legal perspective, to paperwork: affixing a signature to a digital document is the fundamental principle on which the main processes of authorization and validation are based, apart from the specific area of application. The main benefits of introducing digital signing processes are cost reduction and complete automation of the documental workflow, including the authorization and validation phases. In essence, digital signatures allow you to replace the slow and expensive paper-based approval process with a fully digital system that is faster and cheaper.
When users use the usual software tools, they must translate many-valued logical thinking (approximate reasoning) into two-valued computer logic. Although the Structured Query Language (SQL) is a very powerful tool, it cannot easily satisfy needs for data selection based on linguistic terms and degrees of truth. In this paper, we are interested in flexible querying based on fuzzy set theory. Medina et al. developed a server named fuzzy SQL that handles flexible queries and is based on a theoretical model called GEFRED. For modelling flexible queries along with the concept of fuzzy attributes, an extension of the SQL language named fuzzy SQL has been defined; the FRDB is assumed to have already been defined by the user. In this paper, we extend the work of Medina et al. to implement a new architecture of fuzzy DBMS based on the GEFRED model. The architecture is built on an approach in which we handle weak coupling with the SQL Server DBMS.
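The flavour of such flexible querying can be shown with a trapezoidal membership function and a truth-degree threshold, the fuzzy analogue of a WHERE clause. The linguistic label, data and threshold below are hypothetical:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: degree of truth in [0, 1]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical linguistic label 'young' over an age attribute.
young = lambda age: trapezoid(age, 0, 0, 25, 35)

employees = [("ana", 23), ("bo", 28), ("cy", 48)]

# Fuzzy analogue of: SELECT name FROM employees WHERE age IS young THOLD 0.5
# -- keep the rows whose membership degree reaches the threshold.
result = [(name, round(young(age), 2))
          for name, age in employees if young(age) >= 0.5]
print(result)   # [('ana', 1.0), ('bo', 0.7)]
```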
Comparison of AOMDV With and Without Black Hole Attack
Ritika Sharma, Bhawna Singla
The popularity of MANETs is increasing day by day as users choose to connect to a network irrespective of their geographical position. Because of this exceptional feature, MANETs are open to a huge amount of malicious activity. The black hole attack is one kind of threat in MANETs, in which the data of the network is routed towards a node that drops all packets entirely. In this paper we propose a feasible solution to detect and prevent black hole attacks that can be implemented with the AOMDV protocol, and we develop simulations to analyze the performance of the proposed solution based on security parameters such as the packet delivery ratio with and without a black hole.
Development and Characterization of Hybrid Natural Fiber Reinforced Composites based on Polypropylene Resin
G.M Neeha, E.Srikanth
Recently, natural fibers have been receiving considerable attention as substitutes for synthetic fiber reinforcements in plastics due to their low cost, low density, acceptable specific strength, good thermal insulation properties, reduced tool wear, reduced thermal and respiratory irritation, and renewability. The development of natural fiber usage in India has now reached an appreciable stage. The present project work was taken up to develop a natural fiber reinforced composite material with the desired mechanical properties so that it can replace the existing synthetic fiber reinforced composite material in a suitable application.
Public Sentiment Interpretation On Twitter - A Survey
R.Urega, Dr.M.Devapriya
Opinion mining and sentiment analysis is a significant field of Natural Language Processing. Large numbers of users share their thoughts and opinions on microblogging services, and social networks serve as a valuable platform for tracking and analyzing the public sentiment of different people about different domains and products. Twitter is one of the most popular social sites, where millions of users share their opinions. Public sentiment analysis is the process of identifying positive and negative opinions, emotions and evaluations in text. This paper reviews and analyses a number of techniques for public sentiment analysis and its classification.
Confined Decentralized Cloud Storage Using Authentic Access Control Scheme
Ms.C.Bindu , Mrs.G.Nalini
Cloud computing's multi-tenancy feature poses privacy, security and access control challenges because physical resources are shared among untrusted tenants. In order to achieve safe storage, policy-based file access control, policy-based assured file deletion and policy-based renewal of files stored in a cloud environment, a suitable encryption technique with key management should be applied before outsourcing the data. This paper implements secure cloud storage by providing access to files under policy-based file access using an Attribute-Based Encryption (ABE) scheme with an RSA public/private key combination, where the private key is derived from the user's credentials, so that high security is achieved. A time-based file revocation scheme is used for assured file deletion: when the time limit of a file expires, the file is automatically revoked and cannot be accessed by anyone in the future. Manual revocation is also supported, and policy-based file renewal is proposed, where renewal is done by providing a new key for the existing file.
Study of Growing Grapes Technique for Malware Detection
Mr. Bhushan Kinholkar
Behavior-based detection is a promising way to address the pressing security problem of malware; the great challenge lies in detecting malware in a manner that is both accurate and light-weight. The behavior-based detection method named 'growing grapes' aims to enable accurate online detection. It consists of a clustering engine and a detection engine. The clustering engine groups the objects of a suspicious program, e.g., processes and files, together into a cluster, just like growing grapes. The detection engine recognizes the cluster as malicious if the behaviors of the cluster match a predefined behavior template formed by a set of discrete behaviors; malware is recognized based on multiple behaviors and on the source of the processes requesting those behaviors. The method is light-weight because it uses OS-level information flows instead of data flows, which generally impose a significant performance impact on the system. To further improve performance, a novel method of organizing the behavior template and template database is proposed, which not only makes the template-matching process very quick but also keeps the storage space small and fixed. Furthermore, detection accuracy and performance are optimized using a combinatorial optimization algorithm, which properly selects and combines multiple behaviors to form a template for malware detection. Finally, malicious OS objects are removed in a cluster fashion rather than one by one as in traditional methods, which helps users thoroughly eliminate the changes made by a malware instance without knowledge of its malware family.
Stochastic Analysis Of The LMS And NLMS Algorithms For Cyclostationary White Gaussian Inputs
S.Amarnath Reddy, G.Amjad Khan
This paper studies the stochastic behavior of the LMS and NLMS algorithms in a system identification framework when the input signal is a cyclostationary white Gaussian process. The input cyclostationary signal is modeled by a white Gaussian random process with periodically time-varying power. Mathematical models are derived for the mean and mean-square deviation (MSD) behavior of the adaptive weights under the cyclostationary input. These models are also applied to a non-stationary system with a random-walk variation of the optimal weights. Finally, the performance of the two algorithms is compared for a variety of scenarios.
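For readers unfamiliar with the two algorithms, the sketch below runs the adaptive update on a toy system-identification problem with exactly the paper's input model: white Gaussian noise with periodically time-varying power. The filter length, step sizes and plant coefficients are illustrative choices:

```python
import numpy as np

def lms_nlms(x, d, n_taps=8, mu=0.05, eps=1e-8, normalized=False):
    """Adaptive FIR identification. LMS: w <- w + mu*e*u with e = d - w.u;
    NLMS divides the step by ||u||^2, making the update scale-invariant."""
    w, u = np.zeros(n_taps), np.zeros(n_taps)
    err = np.empty(len(x))
    for n in range(len(x)):
        u = np.roll(u, 1); u[0] = x[n]            # slide the input regressor
        e = d[n] - w @ u
        w += (mu / (eps + u @ u) if normalized else mu) * e * u
        err[n] = e
    return w, err

# Toy run with the paper's input model: white Gaussian noise whose
# power varies periodically in time (a cyclostationary process).
rng = np.random.default_rng(0)
n = 4000
power = 1 + 0.8 * np.sin(2 * np.pi * np.arange(n) / 200)  # periodic variance
x = rng.standard_normal(n) * np.sqrt(power)
h = np.array([1.0, -0.5, 0.25, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown plant
d = np.convolve(x, h)[:n]
w_nlms, _ = lms_nlms(x, d, mu=0.5, normalized=True)
print(np.round(w_nlms, 2))                                # close to h
```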
Grid computing is the collaboration of interconnected computing resources ready to serve user requests for complex and time-consuming processes. Task scheduling is one of the thrust research areas of grid computing, as it can be viewed as an NP-complete problem. Task scheduling is further complicated because the resources in a grid environment have unique characteristics, so resource allocation has limited opportunities for finding the optimal solution. Over the past decades, many heuristic task scheduling algorithms have been proposed that have had a substantial impact on the performance of task scheduling. Unfortunately, these algorithms still fall short of producing fully optimal allocations of resources to the systems that need them, and evaluating optimality is also difficult, which makes the efficiency of the algorithms hard to prove. Therefore, this review presents an in-depth study of the grid computing environment, existing heuristic scheduling algorithms and their significance. This paper also includes a comparative analysis of the reviewed algorithms using the most dominant parameters.
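As one concrete example of the heuristics such reviews cover, here is a sketch of the widely studied min-min algorithm, which repeatedly schedules the task with the smallest minimum completion time (the ETC matrix values are illustrative):

```python
def min_min(etc):
    """Min-min heuristic. etc[t][m] = expected time to compute task t on machine m."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines                 # machine-available times
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # completion time of each unscheduled task on its best machine
        best = {t: min((ready[m] + etc[t][m], m) for m in range(n_machines))
                for t in unscheduled}
        t, (ct, m) = min(best.items(), key=lambda kv: kv[1][0])
        schedule[t] = m
        ready[m] = ct
        unscheduled.remove(t)
    return schedule, max(ready)                # assignment and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]
print(min_min(etc))
```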
A Review on Energy Efficient Clustering Techniques in WSN
Amanjot Kaur, Navreet Kaur, and Ap Diviya Bharti
Energy saving to prolong the network lifetime is one of the major design issues when developing a new routing protocol for wireless sensor networks (WSNs). Clustering is a key mechanism in large multi-hop WSNs for obtaining scalability, reducing energy consumption and increasing network lifetime to achieve better network performance. Various clustering approaches have been proposed in the literature. In this article we present a survey of different energy-efficient clustering techniques, describe the working of a few of them, and distinguish them according to operational mode and state of clustering.
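One classic representative of such protocols is LEACH, whose randomized, rotating cluster-head election can be sketched as follows (parameter values are illustrative):

```python
import random

def leach_elect(node_ids, p=0.05, r=0, recent_heads=frozenset()):
    """One round of LEACH-style cluster-head election.
    p: desired fraction of heads; r: current round;
    recent_heads: nodes that served as head in the last 1/p rounds (ineligible)."""
    threshold = p / (1 - p * (r % round(1 / p)))   # standard LEACH threshold T(n)
    return [n for n in node_ids
            if n not in recent_heads and random.random() < threshold]

heads = leach_elect(range(100), p=0.05, r=3)       # roughly 5% of nodes become heads
```

Rotating the head role this way spreads the energy-hungry aggregation and long-range transmission duty across the network, which is the main lever these protocols use to extend lifetime.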
A Data Mining Model for Customer Relationship Management - A Review
Rekha K S
Data mining is an analytic process designed to explore data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. Data mining is a challenging area, and its possible applications in business provide an emerging research topic. It is about analyzing data patterns to extract knowledge for optimizing customer relationships.
Component-Based: The Right Candidate for Restructuring the Nature of Software Development in Organizations
Badamasi Imam Ya’u
Component-based software development is an emerging field in software engineering that aims toward the cost-effective development of composite components or complete systems by reusing pre-built components or subsystems, perhaps stored in a repository. In this paper the techniques of current software models, such as objects and classes in object-oriented programming languages and architecture models, are studied, leading to a rationale for a component-based software model with the unique properties of compositionality and encapsulation, in contrast to what happens in current component models.
The Role of Informatics in Cancer Personalized Medicine
Badamasi Imam Ya’u
No doubt, a significant number of people have suffered adverse effects of medications, especially traditional or self-medication, to the extent that deaths occur. These effects happen as a result of mismatches with the biological makeup of the individuals taking such medications. This paper presents the state of the art of personalized medicine and informatics in the treatment of cancer, as well as some challenges informatics faces in personalized medicine for cancer.
Implementation of I2C Multi-Task and Multi-Slave Bus Controller Using Verilog
Shaik Fazil Ahmed, Y. Murali, M.Tech
This paper implements serial data communication using an I2C (Inter-Integrated Circuit) master bus controller on a field programmable gate array (FPGA). The I2C master bus controller was interfaced with a MAXIM DS1307, which acts as a slave. The module was designed in Verilog HDL and simulated in ModelSim 10.1c, and the design was synthesized using Xilinx ISE Design Suite 14.2. The I2C master initiates data transmission and the slave responds to it in turn. The controller can be used to interface low-speed peripherals in devices such as motherboards, embedded systems, mobile phones, set-top boxes, DVD players and PDAs.
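Independent of the Verilog implementation, the bus transaction the controller drives can be sketched symbolically: a START condition, the 7-bit slave address with the R/W bit, then data bytes, each acknowledged by the slave. A hypothetical trace generator:

```python
def i2c_write_frame(addr7: int, data: bytes) -> list[str]:
    """Symbolic trace of an I2C write: START, address+W, then data bytes.
    After each byte the slave is expected to pull SDA low (ACK)."""
    frame = ["START", f"byte {addr7 << 1 | 0:08b}", "ACK(slave)"]  # R/W bit 0 = write
    for b in data:
        frame += [f"byte {b:08b}", "ACK(slave)"]
    frame.append("STOP")
    return frame

# e.g., setting the seconds register (0x00) of a DS1307-like RTC at I2C address 0x68
print(i2c_write_frame(0x68, bytes([0x00, 0x30])))
```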
Implementation of Mobile Crowd Network for Equalized Distribution of Work Processing
Vandana Sahu, Prof. Santosh Tamboli
In this paper we implement a distributed framework for performing a simple computation on mobile computing devices. A central server takes a large computation and decomposes it; mobile computing devices are then allowed to connect to the server. Upon connecting, the devices are assigned tasks to complete, and upon completing the tasks, the results are sent back to the server to be combined.
The inherent problems of mobile computing, such as resource scarcity, security and low connectivity, pose problems for most applications, and the dynamic nature of mobile computing makes sharing and coordinating work difficult.
We address this by pooling together the processing power of mobile devices within a crowd to form a mobile cloud. We explore this concept of 'work stealing' crowd computing for distributed processing on an opportunistic network and focus on the optimized processing of work. On top of the work-stealing mechanism, security is specified: the data transmitted across the crowd network is secured using encryption techniques, and a system algorithm is given that states the techniques and their working.
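A minimal sketch of the work-stealing idea, assuming each device keeps a double-ended task queue and idle devices steal from the opposite end of a busy peer's queue (the encryption layer described above is omitted):

```python
from collections import deque
import random

class Worker:
    def __init__(self, tasks=()):
        self.deque = deque(tasks)   # owner pops from the right, thieves steal from the left

    def run_one(self, peers):
        if self.deque:
            return self.deque.pop()()            # own work: LIFO end
        for victim in random.sample(peers, len(peers)):
            if victim.deque:
                return victim.deque.popleft()()  # steal: FIFO end, reduces contention
        return None                              # crowd is idle

workers = [Worker([lambda i=i: i * i for i in range(5)]), Worker()]
print(workers[1].run_one([workers[0]]))          # idle device steals a task -> 0
```

Stealing from the opposite end of the victim's queue is the standard trick for equalizing load while keeping owner and thief from contending over the same tasks.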
Reconfigurable Modified Viterbi Decoder For A WiFi Receiver
Y. Radhika, G. Mukesh, M.Tech
In every digital communication system, convolutional codes are among the most efficient forward error correcting (FEC) codes, and Viterbi decoders are used to decode them. Together, convolutional encoding with Viterbi decoding forms a powerful FEC technique when the message is corrupted by AWGN in a channel. The general Viterbi Algorithm (VA) requires an exponential increase in hardware complexity to achieve greater decoder accuracy. When the decoding process uses the Modified Viterbi Algorithm (MVA), which still follows the maximum-likelihood path, computations are significantly reduced, resulting in reduced hardware utilization. In this paper, we present a convolutional encoder and Viterbi decoder with a constraint length of 3 and a code rate of 1/2, and compare the hardware utilization of the general VA and the modified VA. The design is realized using Verilog HDL; simulation and synthesis are done using Xilinx 14.3 ISE, and the design is implemented on an FPGA Virtex kit.
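For reference, a compact software model of the baseline VA for a K=3, rate-1/2 code, assuming the common generator pair (7, 5) in octal, which the abstract does not specify; the MVA would prune these per-state computations:

```python
G = (0b111, 0b101)  # generators 7,5 (octal) for constraint length K=3, rate 1/2

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # [newest bit, s1, s0]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                             # shift register advances
    return out

def viterbi(received):
    """Hard-decision VA over the 4-state trellis, keeping one survivor per state."""
    INF = float("inf")
    metric = [0] + [INF] * 3
    paths = [[] for _ in range(4)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(e != x for e, x in zip(expect, r))
                ns = reg >> 1
                if m < new_metric[ns]:               # keep the better survivor
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0]
assert viterbi(encode(msg)) == msg                   # noiseless round trip
```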
Power-Management Strategies for a Grid-Connected PV-FC Hybrid System
Sk. Riyaz, K. Sathish Kumar, M.Tech
This paper presents a method to operate a grid-connected hybrid system composed of a photovoltaic (PV) array and a proton exchange membrane fuel cell (PEMFC). The PV array normally uses a maximum power point tracking (MPPT) technique to continuously deliver the highest power to the load when variations in irradiation and temperature occur, which makes it an uncontrollable source. In coordination with the PEMFC, the hybrid system's output power becomes controllable. Two operation modes, the unit-power control (UPC) mode and the feeder-flow control (FFC) mode, can be applied to the hybrid system. The coordination of the two control modes, the coordination of the PV array and the PEMFC in the hybrid system, and the determination of reference parameters are presented. The proposed operating strategy with a flexible operation-mode change always operates the PV array at maximum output power and the PEMFC in its high-efficiency performance band, thus improving the performance of system operation, enhancing system stability, and decreasing the number of operating-mode changes.
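A heavily simplified sketch of the mode-coordination idea (all power values, thresholds and the switching rule here are hypothetical illustrations, not the paper's control law):

```python
def fc_reference(p_pv, p_load, mode, p_ms_ref=40.0, p_feeder_ref=10.0,
                 fc_band=(20.0, 60.0)):
    """Hypothetical coordination for a PV-PEMFC hybrid (kW values illustrative).
    UPC: hybrid output tracks a fixed reference; FFC: feeder flow tracks a reference."""
    if mode == "UPC":
        p_fc = p_ms_ref - p_pv                  # hybrid output held at P_MS_ref
    else:                                       # FFC: hybrid covers load minus feeder import
        p_fc = p_load - p_feeder_ref - p_pv
    lo, hi = fc_band
    if not lo <= p_fc <= hi:                    # switch modes before the PEMFC
        mode = "FFC" if mode == "UPC" else "UPC"  # leaves its high-efficiency band
    return max(lo, min(hi, p_fc)), mode
```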
An RTOS Architecture for Industrial Wireless Sensor Network Stacks with Multi-Processor Support
D. Saritha, R. Prashanthi, M.Tech
This paper presents an RTOS-based architecture designed for data transmission between two controlling units through an industrial wireless sensor network (IWSN) without collision. The RTOS mediates between the hardware and the application. Here, a protocol stack is used to keep the layers of the protocol, which comes under the IEEE 802.15.4 standard, independent of one another. Two techniques of the stack (PAL and NILI) are used with IEEE 802.15.4 to reduce collisions and timing overhead. During packet transmission some collisions may occur, and these collisions must be avoided to prevent data loss. The project deals with data transmission between two units at the exact time without any collision; data transmission time is improved with the protocol standard. One section runs with an RTOS and LPC2148 as the master node, and another acts as a normal data acquisition node to which the sensors are connected. The data acquisition node uses a Peripheral Interface Controller. Communications between the two nodes (hardware and application) are accomplished through IEEE 802.15.4.
Implementation of Interface between AXI and DDR3 Memory Controller for SoC
Onteru Sreenath, Syed Kareemsaheb, Associate Prof.
This paper describes the implementation of an AXI-compliant DDR3 memory controller. It discusses the overall architecture of the DDR3 controller along with the detailed design and operation of its individual sub-blocks and the pipelining implemented to increase the design throughput. It also discusses the advantages of DDR3 memories over DDR2 memories and the operation of the AXI protocol. The AXI DDR3 controller provides access to DDR3 memory through the AXI bus interface: it accepts read/write commands from AXI and converts them into DDR3 accesses, combining AXI burst transactions into single DDR accesses wherever possible to achieve the best performance from the DDR3 memory subsystem. The controller works as an intelligent bridge between the AXI host and DDR3 memory, taking care of DDR initialization and the various timing requirements of the DDR3 memory. The controller implements multiple schemes to increase the effective memory throughput: it operates all the memory banks in parallel to attain maximum throughput and minimizes the effect of precharge/refresh and other DDR-internal operations. The design is simulated in ModelSim 10.4a and synthesized with the Xilinx ISE tool to report the area, power, and delay.
Detecting Phishing Websites Based On Improved Visual Cryptography
Reshma Ramdas T
People commonly use the internet for online transactions because of its cost effectiveness and ease of use, so it is necessary to ensure the security of such transactions; unfortunately, an attacker can attack a system online or offline. Among the various kinds of internet attacks, phishing is identified as a major security threat, and existing prevention mechanisms often fail against it, so the prevention mechanism should be very strong. Phishing is defined as an identity-theft attempt to obtain the confidential and private information of individuals or companies for monetary or other gains. If these threats are not addressed thoroughly, people cannot trust online transactions that require authentication through credentials. A visual cryptography based technique automatically preserves the privacy of the image captcha during registration by dividing the original captcha containing the password into two shares, which are stored in different databases. Decryption is possible only when both shares are presented together; an individual share cannot reveal the original captcha. The proposed methodology also secures the user share by applying the AES encryption algorithm with PKCS5 padding, which guarantees the confidentiality of the user share against attacks.
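The share-splitting idea can be illustrated with a simplified XOR-based (2,2) scheme over the captcha's image bytes; the paper uses visual cryptography proper and additionally encrypts the user share with AES, so this is only the splitting step:

```python
import os

def split_shares(secret: bytes) -> tuple[bytes, bytes]:
    """(2,2) secret sharing: either share alone is uniformly random noise;
    XOR of both shares recovers the captcha image bytes."""
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2        # stored in different databases, per the scheme

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

captcha = b"PNG-image-bytes-of-captcha"
s1, s2 = split_shares(captcha)
assert combine(s1, s2) == captcha and s1 != captcha
```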
The paper presents the current trends being faced in the field of banking security. Online banking has become progressively important to the profitability of financial institutions; as the number of consumers using online banking increases, online banking systems become more attractive targets for criminals to attack. To maintain customers' trust and confidence in the security of their online bank accounts, financial institutions must establish how attackers compromise accounts and develop methods to safeguard them. The distinctive aspect of security in this industry is that the security posture of a bank does not depend solely on the safeguards and practices implemented by the bank; it depends equally on the awareness of the users of the banking channel and on the quality of end-user terminals. This makes the task of safeguarding data confidentiality and integrity a greater challenge for the industry [1].
Hyperspectral Image Denoising with a Spatial-Spectral View Fusion Strategy
Princee Parul, Rakesh Gandhi
Image fusion is a widely used technique to integrate information, while image registration and radiometric normalization are two essential preprocessing methods that bring multi-temporal or multi-sensor data onto identical geometric and radiometric bases, respectively. The image fusion procedure can be characterized as the integration of information from multiple registered images without the introduction of distortion. It is often impossible to capture a single image that contains every relevant object in focus. This paper discusses various types of image fusion methods; each available procedure is designed for a specific type of image. Until recently, the methods of highest relevance for remote-sensing data processing and analysis have been pixel-level image fusion techniques, for which a wide range of methods has been developed and a rich theory exists. Researchers have shown, however, that fusion techniques operating on features in a transform domain yield subjectively better fused images than pixel-based methods; such feature-based fusion methods are typically built on empirical or heuristic rules. The aim of this paper is to present a wide range of algorithms together with a comparative study, since many techniques have been proposed by different researchers with the goal of fusing images and producing a clear visual result.
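Two of the simplest pixel-level rules such surveys cover can be sketched directly; feature-based methods replace these rules with coefficient selection in a transform domain:

```python
import numpy as np

def fuse_average(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Pixel-level weighted-average fusion of two registered, normalized images."""
    assert img_a.shape == img_b.shape, "fusion assumes prior image registration"
    return (w * img_a.astype(float) + (1 - w) * img_b.astype(float)).astype(img_a.dtype)

def fuse_max(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """'Choose-max' rule: keep the locally stronger response, e.g. the in-focus pixel."""
    return np.maximum(img_a, img_b)

a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
fused = fuse_average(a, b)
```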
Analysis of Star, Tree and Mesh Optical Network Topologies for successive distance between nodes Using Optimized Raman-EDFA Hybrid Optical Amplifier
Rajpreet Singh, Dr. Sanjeev Dewra
The performance of star, tree and mesh topologies is analyzed for successive distances between the nodes at a bit rate of 15 Gbps. The number of users supported at different distances is calculated in terms of Quality Factor and BER for an input signal power of -40 dBm. An optimized hybrid Raman-EDFA amplifier is used for post-amplification. A Mach-Zehnder modulator with an extinction ratio of 30 dB and a symmetry factor of -1 is used for the modulation of signals. The spacing between the channels is taken as 0.1 THz with a linewidth of 1 MHz. It is observed that, in terms of quality factor and BER, the star topology is capable of supporting more users than the other topologies over a distance of 6 km.
Potential based similarity metrics for implementing hierarchical clustering
M.K.V.Anvesh, Dr. B. Prajna
The main aim of the data mining process is to extract information from a large data set and transform it into an understandable form for further use. Clustering is important in data analysis and data mining applications: it is the task of grouping a set of objects so that objects in the same group are more similar to each other than to those in other groups (clusters). A prominent approach is hierarchical clustering, a common method used to determine clusters of similar data points in multidimensional spaces. When performing hierarchical clustering, some metric must be used to determine the similarity between pairs of clusters. Traditional similarity metrics either can only deal with simple shapes or are very sensitive to outliers. Potential-based similarity metrics, the average potential energy similarity metric and the average maximal potential energy similarity metric, have special features such as strong anti-jamming capability, and they are capable of finding clusters of complex irregular shapes.
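The abstract does not define the two metrics; one plausible reading, assuming a gravity-like pairwise potential -1/d between points, is sketched below (the paper's exact definitions may differ):

```python
import numpy as np

def pair_potential(x, y, eps=1e-12):
    """Gravity-like potential between two points: strongly negative when close."""
    return -1.0 / (np.linalg.norm(x - y) + eps)

def average_potential_energy(A, B):
    """Average pairwise potential between clusters A and B (lower = more similar)."""
    return np.mean([pair_potential(a, b) for a in A for b in B])

def average_maximal_potential_energy(A, B):
    """Each point's strongest attraction to the other cluster, averaged."""
    best_a = [min(pair_potential(a, b) for b in B) for a in A]
    best_b = [min(pair_potential(b, a) for a in A) for b in B]
    return np.mean(best_a + best_b)

A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[5.0, 5.0], [6.0, 5.0]])
print(average_potential_energy(A, B), average_maximal_potential_energy(A, B))
```

In agglomerative clustering, the pair of clusters with the lowest (most negative) value would be merged first.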
A Review on HTTP Streaming Strategies in Media Streaming
Hina Rani, Er. Khushboo Bansal
Streaming mostly refers to a delivery system for media content or dynamic data. HTTP was not designed for data streaming: HTTP communications are stateless and take place over TCP/IP with no continuous connection between the ends, and HTTP responses are usually buffered rather than streamed. HTTP 1.1 added support for streaming through the keep-alive header, so data could be streamed. There are various HTTP streaming strategies: progressive HTTP streaming, HDS (HTTP Dynamic Streaming), HLS (HTTP Live Streaming), HSS (HTTP Smooth Streaming), DASH (Dynamic Adaptive Streaming over HTTP), and some others. HTTP streaming works by breaking the overall stream into a sequence of small HTTP-based file downloads, each download loading one short chunk of an overall, potentially unbounded transport stream. The focus here is on HTTP streaming; in this paper, a review of the various streaming strategies is presented.
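The chunked-download loop common to these strategies can be sketched as follows (the segment URL pattern and the decoder hand-off are hypothetical; real HLS/DASH clients first fetch a manifest, .m3u8 or .mpd, that lists the segments):

```python
import urllib.request

# Hypothetical segment URL pattern for illustration only
BASE = "http://example.com/stream/segment_{:04d}.ts"

def play(n_segments: int):
    for i in range(n_segments):
        with urllib.request.urlopen(BASE.format(i)) as resp:
            chunk = resp.read()       # one short chunk of the overall stream
        decode_and_render(chunk)      # hypothetical hand-off to the media pipeline

def decode_and_render(chunk: bytes):
    pass                              # stand-in for the decoder/renderer
```

Because each segment is an ordinary HTTP download, the stream traverses caches and firewalls like any web traffic, which is the main appeal of these strategies.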
A Review: Image enhancement approaches using Histogram Equalization
Lalit, Ankur Gupta
Image enhancement processes an image to obtain a result more suitable than the original for specific purposes. Many images suffer from poor contrast. Image enhancement improves an image's appearance by increasing the dominance of some features between different regions of the image. Histogram equalization (HE) is widely used for contrast enhancement because of its simple function. In this paper, various enhancement techniques from the literature are discussed along with their effects and methodologies.
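For concreteness, a sketch of plain HE for an 8-bit grayscale image, using the standard remapping through the normalized cumulative histogram:

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first non-empty bin
    scale = max(img.size - cdf_min, 1)
    lut = np.round((cdf - cdf_min) / scale * 255).astype(np.uint8)
    return lut[img]

img = np.clip(np.random.normal(100, 15, (128, 128)), 0, 255).astype(np.uint8)
out = equalize(img)   # a low-contrast input is spread across the full 0-255 range
```

The variants the survey covers (adaptive, bi-histogram, etc.) mostly change how this mapping is computed locally or how brightness is preserved, not the core CDF idea.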
Performance Evaluation of Secure tunnel technique using IPv6 Transition over IPv4 Channel
Kamna Chauhan, Pooja Jain
Sharing data and resources among different devices requires networking. As networks grow day by day, internet protocols are gaining popularity. Different transition mechanisms have been established, yet a great deal of research remains to be done. The next-generation Internet Protocol, initially known as IP Next Generation (IPng) and later as IPv6, has been developed to replace the current Internet Protocol (also known as IPv4). To enable the integration of IPv6 into current networks, several transition mechanisms have been proposed by the IPng Transition Working Group. This work examines and empirically evaluates two transition mechanisms, namely 6-over-4 and IPv6-in-IPv4 tunneling, as they relate to the performance of IPv6 [1]. We investigate the effect of these approaches on end-to-end user application performance using metrics such as throughput, latency, host CPU utilization, TCP connection time, and the number of TCP connections per second that a client can establish with a remote server. All experiments were conducted using two dual-stack (IPv4/IPv6) routers and two end-stations running a dual IPv4/IPv6 stack [2].
Green cloud computing: reduced overload with a better architecture
Dhusia Kalyani, Rizvi Z. Ahsan, Firoz Neda, Telgote Stuti, and Ramteke W. Pramod
Software systems architects have continually faced the challenge of scaling up software architectures, and scaling up to meet these needs inevitably introduces additional energy cost. Improvements in applications, algorithms and energy-efficient hardware have been made to achieve energy efficiency, but reducing the energy demands of such architectures remains challenging. The greatest environmental challenges today are e-waste and the energy crisis, which bring green computing into the limelight. Green computing requires algorithms and mechanisms to be redesigned for energy efficiency. State-of-the-art distributed software architectures, however, are not aware of green cloud computing, while implementing it requires changing the whole infrastructure cost-effectively.
Software architectures do not provide the primitives for reasoning about and managing power consumption. In the present work, an energy-efficient resource management system for virtualized cloud data centers that reduces operational costs and provides the required Quality of Service (QoS) is proposed and realized. Our proposal demands that software engineering be green-aware: software engineering and design activities should be judged not only by their technical merits but also by their contributions to energy savings. In particular, the software system architecture seems adequate for addressing green-aware concerns but needs revision; software architectures should be green-aware, providing power management mechanisms as part of the architecture. The results of the present work show that the proposed technique brings substantial energy savings while ensuring reliable QoS, which justifies further investigation and development of the proposed resource management system.
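A minimal sketch of one common mechanism in such resource managers, double-threshold VM consolidation (threshold values and actions below are hypothetical):

```python
def plan_migrations(hosts, low=0.2, high=0.8):
    """Hypothetical threshold policy: hosts maps host name -> CPU utilization in [0,1].
    Underloaded hosts are drained and powered off to save energy; overloaded hosts
    shed VMs to protect QoS, trading energy savings against SLA violations."""
    actions = []
    for name, util in hosts.items():
        if util < low:
            actions.append((name, "migrate all VMs away, switch host off"))
        elif util > high:
            actions.append((name, "migrate some VMs away to protect QoS"))
    return actions

print(plan_migrations({"h1": 0.1, "h2": 0.5, "h3": 0.93}))
```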
An E-commerce Feedback Comment Mining Using SentiWordNet Tool and K-Means Clustering Method
Tinku Varghese, Subha Sreekumar
Reputation-based trust models are widely used in e-commerce applications, where feedback comments are used to compute sellers' reputation trust scores. This system is based on the observation that buyers can express their opinions openly and honestly in free-text feedback comments; if no feedback comments are available, the buyer needs to consider the features specified for each product. The SentiWordNet tool is used to classify feedback comments as positive, negative or neutral, and the k-means clustering method is used to group the data obtained after sentiment analysis, with each sentence in a feedback comment treated as a document. This calculation yields a seller's trust profile. A problem faced by all reputation systems is the 'all good reputation' problem, where reputation scores are universally high for sellers, making it difficult for a potential buyer to identify a reliable seller. This system provides a solution to that problem.
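A toy sketch of the pipeline, with a small hypothetical lexicon standing in for SentiWordNet scores and a one-dimensional k-means over the per-comment scores:

```python
import numpy as np

# Hypothetical mini-lexicon standing in for SentiWordNet (pos - neg score per word)
LEXICON = {"great": 0.8, "fast": 0.5, "good": 0.6,
           "slow": -0.5, "broken": -0.9, "late": -0.6}

def score(comment: str) -> float:
    words = comment.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)

def kmeans_1d(values, k=3, iters=20):
    centers = np.linspace(min(values), max(values), k)
    values = np.asarray(values)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([values[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

comments = ["great seller fast shipping", "item broken and late",
            "good price", "slow delivery"]
labels, centers = kmeans_1d([score(c) for c in comments], k=3)
# the clusters approximate negative / neutral / positive feedback groups
```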
C Source Code Auto Review Tool for Secure Programming of Indian Spacecraft Ground Software Elements, GEOSCHEMACS
K.V.Maruthi Prasad, J.Krishna Kishore
Secure software development involves practicing secure programming constructs and standards. To ensure that software is developed to secure programming standards, secure static analysis plays a major role in finding hidden vulnerabilities in the source code, and secure static analysis tools yield a significant reduction in runtime failures of the software. GEOSCHEMACS (Geostationary Earth Orbit SpaceCraft HEalth Monitoring, Analysis and Control Software) is the primary ground software element for Indian spacecraft and is developed mostly in the C programming language. The aim of this paper is to discuss the development of software for automatically reviewing C source code. The paper highlights the necessity of a customized, proprietary tool for complying with the CERT (Computer Emergency Response Team) C secure coding standard and for enhancing the secure software development of GEOSCHEMACS, and it presents the design and test results of the tool.
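A toy illustration of the auto-review idea, flagging a few patterns associated with CERT C rules via regular expressions (the actual tool performs deeper static analysis; the rules and messages here are simplified):

```python
import re, sys

# Toy rules inspired by CERT C: risky pattern -> the rule it is associated with
RULES = {
    r"\bgets\s*\(": "STR31-C: gets() cannot bound its input; use fgets()",
    r"\bstrcpy\s*\(": "STR31-C: unbounded copy; prefer a bounded alternative",
    r"\bsprintf\s*\(": "STR31-C: unbounded format; prefer snprintf()",
    r"\bsystem\s*\(": "ENV33-C: do not call system()",
}

def review(path: str):
    findings = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            for pattern, message in RULES.items():
                if re.search(pattern, line):
                    findings.append((path, lineno, message))
    return findings

for hit in review(sys.argv[1]):
    print("{}:{}: {}".format(*hit))
```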
Improving Playfair Algorithm To Support User Verification And All The Languages In The World Including Kurdish Language
Omed Hassan Ahmed, Aram Mahmood Ahmed, Sarkar Hasan Ahmed
Communicating over any untrusted medium or network requires privacy and authentication, and cryptography plays an important role in this area. The original Playfair cipher is one of the early cryptographic algorithms; it uses a 5x5 matrix, supports only the English language, and can be cracked easily, and it has been improved in different aspects. This paper first improves the algorithm to support the Kurdish language script by using a 256x256 matrix, which at the same time increases the security of the algorithm dramatically. Secondly, the proposed algorithm is coupled with a hash function (SHA-512) to achieve user verification. For this purpose, software is designed and implemented based on software development concepts.
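The matrix extension can be illustrated at reduced scale: a 16x16 matrix over all 256 byte values applies the classic Playfair digraph rules to arbitrary UTF-8 text, analogous in spirit to the paper's 256x256 code-point matrix (this sketch is an analogue, not the paper's exact construction):

```python
def playfair_matrix(key: bytes):
    """16x16 matrix over all 256 byte values: key bytes first, remaining bytes after."""
    seen, order = set(), []
    for b in key + bytes(range(256)):
        if b not in seen:
            seen.add(b)
            order.append(b)
    pos = {b: (i // 16, i % 16) for i, b in enumerate(order)}
    return order, pos

def playfair(data: bytes, key: bytes, decrypt=False) -> bytes:
    order, pos = playfair_matrix(key)
    step = -1 if decrypt else 1
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input
    out = bytearray()
    for a, b in zip(data[::2], data[1::2]):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                          # same row: shift along the row
            out += bytes([order[ra * 16 + (ca + step) % 16],
                          order[rb * 16 + (cb + step) % 16]])
        elif ca == cb:                        # same column: shift along the column
            out += bytes([order[((ra + step) % 16) * 16 + ca],
                          order[((rb + step) % 16) * 16 + cb]])
        else:                                 # rectangle: swap the columns
            out += bytes([order[ra * 16 + cb], order[rb * 16 + ca]])
    return bytes(out)

ct = playfair("Slaw".encode("utf-8"), b"secret")   # UTF-8 covers Kurdish script too
assert playfair(ct, b"secret", decrypt=True) == b"Slaw"
```

Because the cipher operates on bytes rather than on a fixed alphabet, any script representable in UTF-8 is supported, which is the essence of the language-independence claim.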
This paper presents recent achievements and original solutions towards increased availability, quality and interoperability of LCID (Life Cycle Inventory Data), developed through European Commission-led activities and based on wide stakeholder consultation and international dialogue. An overview of related activities, such as the ILCD (International Reference Life Cycle Data System), the ELCD (European Reference Life Cycle Database) and the ILCD entry-level quality requirements, is presented; the focus is then on the LCDN (Life Cycle Data Network). Purpose: The European Commission's Integrated Product Policy Communication, 2003, defined LCA (Life Cycle Assessment) as the 'best framework for assessing the potential environmental impacts of products'. Since then, the use of LCA and life cycle approaches has been developing in a wide range of European policies, and its use has also grown significantly in business. Increasing the availability of quality-assured LCI (Life Cycle Inventory) data is the current challenge in ensuring the development of LCA in various areas. Methods: One solution to increase availability is to use LCI data from multiple database sources, under the condition that such LCI data are fully interoperable. A non-centralised data network of LCI datasets complying with minimum quality requirements, politically launched in February 2014, already includes several database nodes from different worldwide sources and has the potential to contribute to the needs of the international community.
Robust Utkin’s Observer Based Controller for Deregulated Hydro-Thermal LFC Problem
L.Shanmukha Rao, N.V.Ramana
This paper presents a new nonlinear observer-based control strategy, popularly known as the Utkin observer-based controller, proposed to solve the load frequency control (LFC) problem of a multi-area hydro-thermal deregulated system. The proposed control strategy has inherent capabilities for tackling certain nonlinearities and uncertainties of the highly nonlinear power system. The performance of the Utkin observer-based controller is tested on a two-area hydro-thermal deregulated power system including bilateral contracts when the system is subjected to a sudden disturbance. The dynamic response of the system with the proposed Utkin observer-based controller is compared with the dynamic response of the system with a conventional PI controller.
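The abstract does not reproduce the observer equations; one standard sliding-mode (Utkin-type) observer form, for a plant with state x, input u and measured output y, is:

```latex
% Plant and a standard sliding-mode (Utkin-type) observer
\dot{x} = A x + B u, \qquad y = C x \\
\dot{\hat{x}} = A \hat{x} + B u + L\,\operatorname{sign}\!\left(y - C\hat{x}\right) \\
% The estimation error e = x - \hat{x} then obeys
\dot{e} = A e - L\,\operatorname{sign}(C e)
```

The discontinuous injection term drives the output error to the sliding surface Ce = 0 in finite time, which is the source of the robustness to nonlinearities and uncertainties claimed above.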