Integration of GIS and Cloud Computing for Emergency Systems
Eman Mahmoud, Osman Hegazy, Mohamed Nour El-Dien,
Recent advances in GIS technology and spatial information acquisition technology have led to increasingly distributed data and computing environments. Cloud computing has emerged as a key technology because of its focus on large-scale resource sharing and reduced cost for large-scale data storage. In particular, the large volumes of spatial data involved in some emergency cases create the need for an elastic way to store, analyze, and compute over these resources. In this paper we propose an architecture for spatial data processing based on cloud computing technology. The results show better performance in comparison to previous works.
THE POWER CONSTRAINT AND REMEDIAL METHOD IN DESIGN OF VARIATION TRAINED DROWSY CACHE (VTD-CACHE) IN VLSI SYSTEM DESIGN
Sura Sreenivasulu, M. Mahaboob Basha,
Power is arguably the critical resource in VLSI system design today. In this paper, a brief review of the drowsy cache and the "Variation Trained Drowsy Cache" (VTD-Cache) architecture is presented. As process technology scales down, leakage power consumption becomes comparable to dynamic power consumption. The drowsy cache technique is known as one of the most popular techniques for reducing leakage power consumption in the data cache. However, the drowsy cache is reported to degrade processor performance significantly. In this paper, the VTD-Cache allows a significant reduction of around 50% in power consumption while addressing the reliability issues raised by memory cell process variability. By managing voltage scaling at a very fine granularity, each cache way can be sourced at a different voltage, where the selection of voltage levels depends on both the vulnerability of the memory cells in that cache way to process variation and the likelihood of access to that cache location. The novel and modular architecture of the VTD-Cache and its associated controller makes it easy to implement in memory compilers with a small area and power overhead. The complete process is studied with various diagrams and schematics using Xilinx 14.5 software.
Deadlock Detection and Removal in Distributed Systems
Nidhi Sharma, Amol Parikh,
Advancement in information technology has been tremendous in recent years. Distributed computing concepts are being applied to database systems as well, leading to the development of distributed databases [5]. A distributed database is a collection of multiple logically interrelated databases distributed over a computer network. With the advent of distributed databases arose the problems of deadlocks, concurrency, and database management. In an operating system, a deadlock is a situation which occurs when a process enters a waiting state because a resource it has requested is being held by another waiting process, which in turn is waiting for another resource [4]. If a process is unable to change its state indefinitely because the resources it has requested are being used by another waiting process, then the system is said to be in a deadlock. Concurrency refers to simultaneous accesses to the database by various processes; these accesses must be synchronized to ensure consistency and avoid deadlocks [4]. In this paper, we discuss the various deadlock detection techniques and explain a new approach developed to detect and resolve deadlocks.
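A standard way to realize such detection is to maintain a wait-for graph and search it for cycles. The sketch below is a generic Java illustration of that idea, not the paper's new approach; the class and method names are our own.

```java
import java.util.*;

// Generic wait-for-graph (WFG) deadlock detection: processes are nodes and
// an edge P -> Q means P waits for a resource held by Q; a cycle in the
// graph indicates a deadlock.
class WaitForGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    void addWait(String waiter, String holder) {
        edges.computeIfAbsent(waiter, k -> new ArrayList<>()).add(holder);
    }

    /** Depth-first search for a cycle; returns true if a deadlock exists. */
    boolean hasDeadlock() {
        Set<String> visited = new HashSet<>(), onPath = new HashSet<>();
        for (String p : edges.keySet())
            if (dfs(p, visited, onPath)) return true;
        return false;
    }

    private boolean dfs(String p, Set<String> visited, Set<String> onPath) {
        if (onPath.contains(p)) return true;   // back edge: cycle, hence deadlock
        if (!visited.add(p)) return false;     // already fully explored, no cycle here
        onPath.add(p);
        for (String q : edges.getOrDefault(p, List.of()))
            if (dfs(q, visited, onPath)) return true;
        onPath.remove(p);
        return false;
    }
}
```

For instance, addWait("P1", "P2") followed by addWait("P2", "P1") makes hasDeadlock() return true.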
A New Technique for Improving Recognition of the E-Set Letters
Saeed Vandaki, Saman Zahiri Rad, Naser Mehrshad,
In any language, spoken alphabet recognition, as a subset of speech recognition and pattern recognition, has many applications. Speech recognition is one of the problems in computer science and artificial intelligence that seeks to identify what a person says based on the person's voice; alphabet recognition is one of its branches. Combining different feature extraction and classification methods, in this paper we try to improve English alphabet recognition. A particularly difficult case is the E-set, which contains the letters B, C, D, E, G, P, T, V, and Z; the acoustic waveforms of these letters are so similar that they are hard to distinguish. In this paper we use MFCC feature extraction and SVM classification to achieve the desired results, and the proposed method attains 80% accuracy on the TI ALPHA data set.
Design and Development of a University Portal for the Management of Final Year Undergraduate Projects
Abdulkareem Ademola, Adeyinka Adewale, Dike U. Ike,
Processes associated with undergraduate final year projects have traditionally been manual, requiring a great deal of paperwork, and can be a cumbersome and tiring task for the personnel in charge. The manual process sometimes wastes time and impedes project work because the student carrying out the project is unable to update the lecturer on the level of execution of the project. Also, due to the unavailability of a content management system or repository, duplication of previously completed final year projects occurs. The project supervisors or the personnel in charge may sense that a particular project has been done before, but where is the proof? Where is the system that outright rejects the topic when the student puts it forward, or brings forth a list of projects whose keywords appear in the chosen project topic? This project work therefore eliminates or reduces the error of allowing a student to carry out a project that has been done before, as well as cutting down on the cost and time required by the student to produce a quality technical report. It also helps to prevent the forgery of signatures usually experienced during the final clearance stage after the conclusion of the project work. During the clearance stages, the completed stages are recorded by the computer until the final stage is completed, at which point the student can click the print button to produce the completed clearance form. In this work, we developed an intranet portal platform that integrates all the processes above into one system.
In computer science, an ambiguous grammar is a formal grammar for which there exists a string that has more than one leftmost derivation, while an unambiguous grammar is a formal grammar for which every valid string has a unique leftmost derivation. For real-world programming languages, the reference CFG is often ambiguous, due to issues such as the dangling else problem: in a statement like "if a then if b then s1 else s2", the grammar alone does not determine which "if" the "else" belongs to. Where present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so that the overall phrase grammar is unambiguous. It has been known since 1962 that the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a frequent problem in language design and parser construction, as well as in applications where grammars are used as models of real-world physical structures. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, and we show how to exploit this to conservatively approximate the problem based on local regular approximations and grammar unfolding. As an application, we consider grammars that occur in RNA analysis in bioinformatics, and we demonstrate that our static analysis of context-free grammars is sufficiently precise and efficient to be practically useful.
Artificial neural networks are algorithms that can be used to perform nonlinear statistical modeling and provide a new alternative to logistic regression, the most commonly used method for developing predictive models for dichotomous outcomes in medicine. Neural networks offer a number of advantages, including requiring less formal statistical training, the ability to implicitly detect complex nonlinear relationships between dependent and independent variables, the ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms. Disadvantages include their "black box" nature, greater computational burden, proneness to overfitting, and the empirical nature of model development. An overview of the features of neural networks and logistic regression is presented, and the advantages and disadvantages of using each modeling technique are discussed.
This paper proposes an adaptive threshold estimation method for image denoising in the wavelet domain based on generalized Gaussian distribution (GGD) modeling of subband coefficients. The proposed method, called NormalShrink, is computationally efficient and adaptive because the parameters required for estimating the threshold depend on the subband data. The threshold is computed as T = βσ̂²/σ_y, where σ̂ and σ_y are the standard deviations of the noise and of the subband data of the noisy image, respectively, and β is a scale parameter that depends on the subband size and the number of decompositions. Experimental results on several test images are compared with various denoising techniques such as Wiener filtering [2], BayesShrink [3], and SureShrink [4]. To benchmark against the best possible performance of a threshold estimate, the comparison also includes OracleShrink. Experimental results show that the proposed threshold removes noise significantly, remains within 4% of OracleShrink, and outperforms SureShrink, BayesShrink, and Wiener filtering most of the time.
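A minimal Java sketch of this threshold computation, assuming square subbands stored as 2D arrays and the common median-based noise estimate (both assumptions, since the abstract does not fix these details):

```java
// Sketch of the NormalShrink threshold; sigmaNoise would typically be
// estimated from the finest diagonal subband as median(|HH1|)/0.6745.
class NormalShrink {
    // T = beta * sigmaNoise^2 / sigmaY, with beta = sqrt(ln(Lk / J))
    static double threshold(double[][] subband, double sigmaNoise, int decompositions) {
        int n = subband.length * subband[0].length;
        double mean = 0;
        for (double[] row : subband) for (double v : row) mean += v;
        mean /= n;
        double var = 0;
        for (double[] row : subband) for (double v : row) var += (v - mean) * (v - mean);
        double sigmaY = Math.sqrt(var / n);               // std dev of subband data
        double beta = Math.sqrt(Math.log((double) subband.length / decompositions));
        return beta * sigmaNoise * sigmaNoise / sigmaY;
    }

    // Soft-thresholding of the wavelet coefficients with threshold t.
    static void softThreshold(double[][] w, double t) {
        for (double[] row : w)
            for (int i = 0; i < row.length; i++)
                row[i] = Math.signum(row[i]) * Math.max(Math.abs(row[i]) - t, 0);
    }
}
```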
Enabling Public Auditability and Data Dynamics for Storage Protection in the Cloud
Prof. Santosh Kumar B., Vijaykumar L.,
Cloud Computing has been envisioned as the next-generation architecture of IT enterprise. It moves the application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA eliminates the involvement of the client in auditing whether his data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions of prior works with fully dynamic data updates, and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof-of-storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows that the proposed schemes are highly efficient and provably secure.
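To make the Merkle Hash Tree ingredient concrete, here is a minimal Java sketch that computes a root hash over data blocks; it illustrates only the data structure, not the paper's block-tag authentication or dynamic-update protocol, and the helper names are our own.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Minimal Merkle Hash Tree: leaves hash the data blocks and each internal
// node hashes the concatenation of its children, so the root authenticates
// every block.
class MerkleTree {
    static byte[] sha256(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    static byte[] root(List<byte[]> blocks) throws Exception {
        List<byte[]> level = new ArrayList<>();
        for (byte[] b : blocks) level.add(sha256(b));          // leaf hashes
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                next.add(sha256(left, right));                 // internal node
            }
            level = next;                                      // move up one level
        }
        return level.get(0);                                   // the root hash
    }
}
```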
The use of cache memories is so pervasive in today's computer systems that it is difficult to imagine processors without them. Cache memories, along with virtual memory and processor registers, form a memory hierarchy that relies on the principle of locality of reference. Most applications exhibit temporal and spatial locality among instructions and data. Spatial locality implies that memory locations spatially near the currently referenced address are likely to be referenced soon. Temporal locality implies that the currently referenced address is likely to be referenced again in the near future. Memory hierarchies are intended to keep the most likely referenced items in the fastest devices, which results in an effective reduction in access time.
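A small Java illustration of spatial locality (our example, not from the text): both methods below sum the same array, but with very different cache behavior.

```java
// The row-major loop walks consecutive addresses (each fetched cache line
// serves several elements), while the column-major loop jumps a whole row
// ahead on every access and typically runs noticeably slower for large arrays.
class LocalityDemo {
    static double sumRowMajor(double[][] a) {
        double s = 0;
        for (int i = 0; i < a.length; i++)        // rows outer: cache-friendly
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j];
        return s;
    }

    static double sumColumnMajor(double[][] a) {
        double s = 0;
        for (int j = 0; j < a[0].length; j++)     // columns outer: poor spatial locality
            for (int i = 0; i < a.length; i++)
                s += a[i][j];
        return s;
    }
}
```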
An Analytical Study of the Conic Volume Technique vs. the Bounding Box Technique for Object Picking in a Non-Immersive Virtual World
K. Merriliance, Dr. M. Mohamed Sathik,
In this paper we present an analytical study of selecting 3D objects with a pointing device in a virtual environment (VE). We adapt a 2D picking metaphor to 3D selection in virtual environments by changing the projection and view matrices according to the position and orientation of a pointing device and rendering a conic selection volume to an off-screen pixel buffer. This method works for triangulated as well as volume-rendered objects; no explicit geometric representation is required. We focus on the advantages of the conic volume technique over the bounding box technique for picking objects in a non-immersive virtual world. The usefulness and effectiveness of the proposed evaluation measures are shown by reporting the performance evaluation of two algorithms. We then compare the application of both techniques with related work to demonstrate that they are more suitable.
This paper presents the distributed file system, a client/server-based application that allows clients to access and process data stored on the server as if it were on their own computer. The purpose of a distributed file system (DFS) is to allow users of physically distributed computers to share data and storage resources by using a common file system. The paper consists of an introduction to DFS, its features, background, concepts, design goals, and considerations. It also emphasizes two classical file systems, the Sun Network File System and the Andrew File System, and gives a brief study of the Google File System.
Simulation of a Zero-Voltage-Switching and Zero-Current-Switching Interleaved Boost and Buck Converter
MD. Imran, K. Chandra Mouli,
A novel interleaved boost and buck converter with zero-voltage switching (ZVS) and zero-current switching (ZCS) characteristics is proposed in this paper. By using the interleaved approach, this topology not only decreases the current stress of the main circuit device but also reduces the ripple of the input current and output voltage. Moreover, by establishing a common soft-switching module, the soft-switching interleaved converter can greatly reduce size and cost. The main switches can achieve the characteristics of ZVS and ZCS simultaneously to reduce the switching loss and improve the efficiency over a wide range of load. This topology has two operational conditions depending on the duty cycle. The operational principle, theoretical analysis, and design method of the proposed converter are presented. Finally, simulation results are used to verify the feasibility and accuracy of the proposed converter.
ENTROPY BASED NETWORK CODING MULTIPATH ROUTING IN WIRELESS SENSOR NETWORKS
G. Dhivya, S. Ranjitha Kumari,
Unlike traditional routing schemes that route all traffic along the same path, multipath routing methods split the traffic among several paths in order to ease congestion. It has been recognized by many that multipath routing can be fundamentally more efficient than the traditional approach of routing along single paths. Yet, in contrast to the single-path routing approach, much research in the context of multipath routing has focused on heuristic methods. We demonstrate the significant advantage of optimal (or near-optimal) solutions, and hence we investigate multipath routing. A wireless sensor network is a large collection of sensor nodes with limited power supply and constrained computational capability. Because of the restricted communication range and high density of sensor nodes, packet forwarding in sensor networks is usually performed through multi-hop data transmission. We present a comprehensive taxonomy of the existing multipath routing protocols, focusing on network coding based multipath routing, which is especially designed for wireless sensor networks, and provide an enhancement to that algorithm that gives a better packet delivery ratio, lower packet delivery delay, lower packet loss probability, and longer network lifetime.
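As a hedged illustration of why network coding helps multipath delivery (a textbook-style toy example, not the protocol enhanced in this paper): sending p1, p2, and p1 XOR p2 over three node-disjoint paths lets the sink rebuild both packets from any two arrivals.

```java
// Toy XOR network coding over three node-disjoint paths: transmit p1, p2,
// and p1 XOR p2; the sink recovers both packets from ANY two arrivals,
// so a single path failure loses nothing.
class XorCoding {
    static byte[] xor(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];          // assumes equal-length packets
        for (int i = 0; i < a.length; i++) out[i] = (byte) (a[i] ^ b[i]);
        return out;
    }

    // If p2 is lost, recover it as p1 XOR (p1 XOR p2).
    static byte[] recoverMissing(byte[] received, byte[] coded) {
        return xor(received, coded);
    }
}
```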
JAVA DATABASE CONNECTIVITY (JDBC) - DATA ACCESS TECHNOLOGY
Aditi Khazanchi, Akshay Kanwar, Lovenish Saluja,
A serious problem facing many organizations today is the need to use information from multiple data sources that have been developed separately. To solve this problem, Java Database Connectivity came into existence. JDBC helps us to connect to a database and execute SQL statements against it. The JDBC API provides a set of interfaces, with different implementations for different databases. This paper emphasizes JDBC's history, implementation, architecture, and drivers.
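A minimal sketch of the typical JDBC workflow; the database URL, credentials, and table are placeholders for illustration, and a suitable JDBC driver must be on the classpath.

```java
import java.sql.*;

// Typical JDBC workflow: obtain a Connection, prepare a parameterized SQL
// statement, execute it, and iterate the ResultSet.
public class JdbcDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/testdb";   // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name FROM students WHERE dept = ?")) {
            ps.setString(1, "CS");                           // bind the parameter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next())
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }   // try-with-resources closes the ResultSet, statement, and connection
    }
}
```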
Modeling and Data Testing for Indian University Clusters
Srinatha Karur, Prof. M.V. Raman Murthy,
This paper presents modeling and data testing for clustering Indian universities with respect to statistical and mathematical methods. In the next phase this can be implemented with different data mining tools to observe the differences between them. Using different modeling techniques we can easily find the outliers in the data. In this paper we deal purely with numeric data and do not allow any other type of data. The authors consider two data sets: the given set, and a 50% random sample drawn from the given data. The authors have already described the data preparation in their earlier published paper [51], pp. 31-32.
One of the most important administrative developments in the developing as well as the developed countries has been the commencement and development of a large number of new projects in every field, such as agriculture, irrigation, industry, community development, health, and social welfare. The principal aim and objective of all these programs has been to bring about overall changes in the existing socio-economic structure of the country, thereby providing a dignified way of life to the citizen as a unit and socio-economic upliftment of the world. Thus most administrators are directly concerned with project administration rather than with routine activities. The capacity of an administrative system to formulate and implement relevant and viable projects effectively constitutes a crucial element in the process of development. Development requires planning, and planning includes a lot of projects.
PRIVACY PRESERVATION OF DATA PUBLISHING BASED ON NOISE ENABLED SLICING APPROACH
N. Sathya, C. Grace Padma,
The basic idea of slicing is to break the associations across columns while preserving the associations within each column. This reduces the dimensionality of the data and preserves better utility than generalization and bucketization. Slicing preserves utility because it groups highly correlated attributes together and preserves the correlations between such attributes. The proposed system uses an efficient slicing algorithm to achieve ℓ-diverse slicing: given a microdata table T and two parameters c and ℓ, the algorithm computes a sliced table that consists of c columns and satisfies the privacy requirement of ℓ-diversity. The Pearson and chi-squared correlation coefficients are used to measure correlation in the attribute-partitioning step of ℓ-diversity slicing. Slicing protects privacy because it breaks the associations between uncorrelated attributes, which are infrequent and thus identifying. The proposed system works in the following manner: attribute partitioning, attribute clustering, tuple partitioning, and analysis of the slicing using noise-enabled slicing. In the attribute-partitioning step, we first compute the correlations between pairs of attributes and sensitive attributes using the chi-squared and Pearson correlation coefficients, and then cluster attributes based on those correlations, which improves the accuracy of the partitioning result. After these steps, we evaluate the result by adding noise data to the sensitive attributes for both chi-squared and Pearson based ℓ-diversity slicing. Experimental results show that the proposed system improves data utility and privacy compared with the existing slicing methods.
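A small sketch of the correlation measure used in the attribute-partitioning step (a standard Pearson computation; the class name and array layout are our own):

```java
// Pearson correlation between two attribute columns; attribute pairs with
// high correlation are candidates for the same column of the sliced table.
class Correlation {
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - mx) * (y[i] - my);
            vx  += (x[i] - mx) * (x[i] - mx);
            vy  += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.sqrt(vx * vy);   // result lies in [-1, 1]
    }
}
```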
New Topologies for a Transformerless Single-Stage AC/DC Converter
K. Srinivas, M.V. Praveen Reddy,
This paper presents a high step-down transformerless single-stage single-switch ac/dc converter suitable for universal line applications (90–270 Vrms). The topology integrates a buck-type power-factor correction (PFC) cell with a buck–boost dc/dc cell, and part of the input power is coupled to the output directly after the first power processing. With this direct power transfer feature and shared capacitor voltages, the converter is able to achieve efficient power conversion, high power factor, low voltage stress on the intermediate bus (less than 130 V), and low output voltage without a high step-down transformer. The absence of a transformer reduces the component count and cost of the converter. Unlike most boost-type PFC cells, the main switch of the proposed converter handles only the peak inductor current of the dc/dc cell rather than the superposition of both inductor currents. Detailed analysis and design procedures of the proposed circuit are given and verified by experimental results.
Computing is rapidly moving away from traditional computers. Programs in the future will run on collections of mobile processors that interact with the physical world and communicate over ad hoc networks. We can view such collections as swarms. As with natural swarms, such as a beehive or ant colony, the behavior of a computational swarm emerges from the behaviors of its individual members. This research paper focuses on swarm techniques for creating, understanding, and validating properties of programs that execute on swarms of computing devices. The paper strives to identify and understand swarm practices in principled ways and investigates techniques based on both experimental and analytical approaches.
A ZVS Interleaved Power Factor Correction Based Boost, Half-Bridge and Full-Bridge AC/DC Converter Used in Plug-in Electric Vehicles
V. Ravi Kumar, K. Prasad Yadav,
This paper presents a novel, yet simple zero-voltage switching (ZVS) interleaved boost power factor correction (PFC) ac/dc converter used to charge the traction battery of an electric vehicle from the utility mains. The proposed topology consists of a passive auxiliary circuit, placed between two phases of the interleaved front-end boost PFC converter, which provides enough current to charge and discharge the MOSFETs' output capacitors during turn-ON times. Therefore, the MOSFETs are turned ON at zero voltage. The proposed converter maintains ZVS for the universal input voltage (85 to 265 Vrms), which includes a very wide range of duty ratios (0.07–1). In addition, the control system optimizes the amount of reactive current required to guarantee ZVS during the line cycle for different load conditions. This optimization is crucial in this application since the converter may work at very light loads for long periods of time. Experimental results from a 3 kW ac/dc converter are presented in the paper to evaluate the performance of the proposed converter. The results show a considerable increase in efficiency and superior performance of the proposed converter compared to the conventional hard-switched interleaved boost PFC converter.
Hybrid Selective Harmonic Elimination PWM for Common-Mode Voltage Reduction in Three-Level Neutral-Point-Clamped Inverters for Variable Speed Induction Drives
Venu Madhav, K. Ramesh,
This paper proposes a hybrid selective harmonic elimination pulsewidth modulation (SHEPWM) scheme for common-mode voltage reduction in three-level neutral-point-clamped inverter-based induction motor drives. The scheme uses the conventional SHEPWM (C-SHEPWM) to control the inverter at high frequency (≥ 0.9 of the motor rated frequency) and the modified SHEPWM (M-SHEPWM) at low frequency, and it includes a mechanism to ensure a smooth transition between the two SHEPWM schemes. As a result, at high frequency the C-SHEPWM provides the required high modulation index for the motor, while at low frequency, where a passive filter is less effective for common-mode voltage reduction, the M-SHEPWM is used to suppress the common-mode voltage. Experimental results show that the proposed hybrid SHEPWM scheme can meet the modulation index needs of the motor and reduce the common-mode voltage in the drive, and that the two SHEPWM schemes transition smoothly.
VERIFICATION OF DATA INTEGRITY IN COOPERATIVE MULTICLOUD STORAGE
T. Harshavardhan Raju, Ch. Subha Rao, P. Babu,
The integrity of data in storage outsourcing is ensured by the technique called Provable Data Possession (PDP). In our paper, we deal with the construction of an efficient PDP scheme for distributed cloud storage. It supports scalability of service and data migration, where we consider that the clients' data is stored and maintained by multiple cloud service providers. We introduce a Cooperative PDP (CPDP) scheme based on homomorphic verifiable responses and a hash index hierarchy. The security of our scheme is proved based on a multi-prover zero-knowledge proof system, which satisfies the properties of completeness, knowledge soundness, and zero-knowledge. Besides, we study performance optimization mechanisms for our scheme; in particular, we present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. We show with our experiments that our solution introduces lower computation and communication overheads than non-cooperative approaches.
Web Gate Keeper: Detecting Encroachment in Multi-tier Web Applications
Sanaz Jafari, Prof. Dr. Suhas H. Patil (Guide),
Internet services of all kinds have become very popular in everyday life, and a person's daily routine now depends heavily on them. To fulfill user requirements, such services work on both ends: on the front end the user accesses the internet service, and on the back end the data useful to the user is provided according to the user's requirements. This strategy mainly focuses on detecting intrusions in multi-tier web applications. A multi-tier web application includes two ends: the front end, a web server responsible for running the application, and the back end, the database or file server that serves its output. The strategy identifies intrusions at both the front end and the back end of the web application by monitoring behavior across the front-end web server and the back-end database or file server using an IDS. It can also detect intrusions in both static and dynamic web applications. The IDS has maximum accuracy and is mainly responsible for identifying intrusions.
Generating Meta Alert with Intrusion Framework on IDS
Sivaramaiah Y., S. Nagarjuna Reddy,
Alert aggregation is an important subtask of intrusion detection. The goal is to identify and cluster different alerts (produced by low-level intrusion detection systems, firewalls, etc.) belonging to a specific attack instance which has been initiated by an attacker at a certain point in time. Thus, meta-alerts can be generated for the clusters that contain all the relevant information, whereas the amount of data (i.e., alerts) can be reduced substantially. Meta-alerts may then be the basis for reporting to security experts or for communication within a distributed intrusion detection system. We propose a novel technique for online alert aggregation which is based on a dynamic, probabilistic model of the current attack situation. Basically, it can be regarded as a data stream version of a maximum likelihood approach for the estimation of the model parameters. With three benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 99.96 percent while the number of missing meta-alerts is extremely low. In our simulation of mobile device intrusion, the attack is simply displayed on the device. In addition, meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.
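For intuition, a greatly simplified, window-based stand-in for online alert aggregation is sketched below; the paper's method is a probabilistic maximum-likelihood model, whereas this toy version merely merges alerts that share a source and attack type within a fixed time window (all field names are illustrative; requires Java 16+ for records).

```java
import java.util.*;

// Toy window-based aggregation: alerts sharing a source and attack type
// within a fixed time window are merged into one meta-alert.
record Alert(String srcIp, String attackType, long timestamp) {}
record MetaAlert(String srcIp, String attackType, long first, long last, int count) {}

class Aggregator {
    private static final long WINDOW_MS = 10_000;
    private final Map<String, MetaAlert> open = new HashMap<>();

    MetaAlert feed(Alert a) {
        String key = a.srcIp() + "|" + a.attackType();
        MetaAlert m = open.get(key);
        if (m == null || a.timestamp() - m.last() > WINDOW_MS)      // new attack instance
            m = new MetaAlert(a.srcIp(), a.attackType(), a.timestamp(), a.timestamp(), 1);
        else                                                        // extend the cluster
            m = new MetaAlert(m.srcIp(), m.attackType(), m.first(), a.timestamp(), m.count() + 1);
        open.put(key, m);
        return m;   // many raw alerts collapse into a few meta-alerts
    }
}
```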
Mobile Commerce is an evolving area of e-Commerce, where users can interact with service providers through a mobile and wireless network, using mobile devices for information retrieval and transaction processing. M-Commerce services and applications can be adopted through different wireless and mobile networks, with the aid of several mobile devices. However, constraints of both mobile networks and devices influence their operational performance; therefore, there is a strong need to take those constraints into consideration in the design and development phases of m-Commerce services and applications. Another important factor in designing m-Commerce services and applications is the identification of mobile users' requirements. Furthermore, m-Commerce services and applications need to be classified based on the functionality they provide to mobile users. This classification results in two major classes: directory and transaction-oriented services and applications. This paper suggests a new approach for designing and developing m-Commerce services and applications. The approach relies on mobile users' needs and requirements, the classification of m-Commerce services and applications, and the current technologies for mobile and wireless computing and their constraints.
Color Extended Visual Cryptography Using Error Diffusion With VIP Synchronization
Grishma R. Bhokare, Prof. C. J. Shelke
Visual cryptography (VC) schemes encrypt a secret image into two or more cover images, called shares. The secret image can be reconstructed by stacking the shares together. In extended visual cryptography, the share images are constructed to contain meaningful cover images, thereby providing opportunities for integrating visual cryptography and biometric security techniques. Visual cryptography is a secret sharing scheme which uses images distributed as shares such that, when the shares are superimposed, a hidden secret image is revealed. Color visual cryptography encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or grayscale VC schemes; however, they cannot be applied directly to color shares due to different color structures. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality.
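For background, the classic monochrome case can be sketched compactly. The following is a minimal (2,2) visual cryptography example in Java for a binary image, not the paper's color VIP-synchronization scheme; stacking the two shares (pixelwise OR) reveals the secret by contrast.

```java
import java.util.Random;

// Minimal (2,2) visual cryptography for a binary image (true = black).
// Every secret pixel expands to two subpixels per share: identical patterns
// for white, complementary patterns for black.
class VisualCrypto22 {
    static boolean[][][] makeShares(boolean[][] secret, Random rnd) {
        int h = secret.length, w = secret[0].length;
        boolean[][] s1 = new boolean[h][2 * w], s2 = new boolean[h][2 * w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                boolean a = rnd.nextBoolean(), b = !a;     // random [black,white] order
                s1[y][2 * x] = a; s1[y][2 * x + 1] = b;
                if (secret[y][x]) {                        // black: complementary pattern
                    s2[y][2 * x] = b; s2[y][2 * x + 1] = a;
                } else {                                   // white: identical pattern
                    s2[y][2 * x] = a; s2[y][2 * x + 1] = b;
                }
            }
        return new boolean[][][]{s1, s2};
    }
    // Stacking = pixelwise OR: black secret pixels become fully black,
    // white ones stay half black, so the secret appears by contrast.
}
```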
The aim of this paper is to establish an analytical method for the design of the improved low-pass broadband passive harmonic filter (IBF) that absorbs current harmonics caused by three-phase bridge rectifiers used in motor drives. The design attempts to comply with the IEEE Standard 519-1992 recommended harmonic limits applied to the current harmonic limits of three-phase rectifier systems. In this paper, an analytical design method of the IBF for three-phase diode rectifier front-end type adjustable speed drives is presented. The method is based on frequency-domain modeling of the rectifier and filter, and its success rests on accurate representation of the load harmonics. With the harmonics well defined, the harmonic and fundamental frequency equivalent circuits are utilized to analytically calculate the voltages and currents; thus, the size and performance of the filter can be optimized. The analytical method is verified via computer simulations and laboratory experiments. A performance comparison of various passive harmonic filters for three-phase diode rectifier front-end type adjustable speed drives is also provided. The comparison involves the input current total harmonic distortion, input power factor, rectifier voltage regulation, energy efficiency, size, and cost. Issues related to the parallel/series harmonic resonance problem are addressed, and the unbalanced operation performance is investigated. The comparison is based on analysis and computer simulations, and the results are validated by laboratory experiments.
Automatic Landmarking on Lateral Cephalograms Using a Single Fixed View Appearance Model for Gender Identification
Janaki Sivakumar, Prof. K. Thangavel,
This manuscript describes an Active Appearance Model (AAM) for automatic cephalometric analysis in forensic science, which is concerned with the recognition, identification, individualization, and evaluation of physical evidence. A complete model of shape and texture was built from a dataset of manually annotated images and then tested with unseen images. To apply the AAM method for identifying landmark points, 140 images were collected from the AAOF, of which 67 were female and 73 were male, with ages ranging from 15 to 45 years. Using the AAM algorithm, the mean shape is extracted and the appearance variation is collected by establishing a piece-wise affine warp between each image of the training set and the mean shape.
Optimal Design of Direct Adaptive Fuzzy Control Scheme for a Class of Uncertain Nonlinear Systems Using Firefly Algorithm
Omid N. Almasi, Mina Kavousi, Assef Zare,
In this paper, an optimal direct adaptive fuzzy controller is designed for a class of uncertain nonlinear continuous systems through the following three steps: first, some fuzzy sets whose membership functions cover the state space are defined; then, the firefly algorithm (FA) is used as a novel nature-inspired optimization approach to construct an initial adaptive fuzzy controller in which some parameters are free to change. In other words, the control knowledge (fuzzy IF-THEN rules) is incorporated into the fuzzy controller through the setting of its initial parameters, while a suitable adaptation parameter is simultaneously determined by the FA; finally, an adaptive law is developed to adjust the free parameters based on a Lyapunov synthesis method. It is confirmed that i) the closed-loop system using this optimal adaptive fuzzy controller is globally stable in the sense that all signals involved are bounded, and ii) the tracking error converges to zero asymptotically. The proposed control scheme is then applied to two well-known examples of nonlinear control problems. Moreover, two other methods, non-optimal direct adaptive fuzzy (DAF) control and DAF based on particle swarm optimization, are also implemented for comparison. The results demonstrate the effectiveness of the proposed optimal direct adaptive fuzzy control methodology.
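A minimal sketch of the firefly algorithm's core update (the standard FA, with illustrative parameter values and search bounds; this is not the paper's specific tuning):

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Core firefly algorithm for minimizing f over R^d: every firefly moves
// toward each brighter (lower-cost) one with distance-decaying attractiveness
// plus a small random step.
class FireflyOptimizer {
    static double[] minimize(ToDoubleFunction<double[]> f, int n, int d,
                             int iters, Random rnd) {
        double beta0 = 1.0, gamma = 1.0, alpha = 0.2;   // illustrative FA parameters
        double[][] x = new double[n][d];
        for (double[] xi : x)
            for (int k = 0; k < d; k++) xi[k] = rnd.nextDouble() * 4 - 2;
        for (int t = 0; t < iters; t++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (f.applyAsDouble(x[j]) < f.applyAsDouble(x[i])) {  // j is brighter
                        double r2 = 0;
                        for (int k = 0; k < d; k++)
                            r2 += (x[i][k] - x[j][k]) * (x[i][k] - x[j][k]);
                        double beta = beta0 * Math.exp(-gamma * r2);      // attractiveness
                        for (int k = 0; k < d; k++)
                            x[i][k] += beta * (x[j][k] - x[i][k])
                                     + alpha * (rnd.nextDouble() - 0.5);
                    }
        double[] best = x[0];
        for (double[] xi : x)
            if (f.applyAsDouble(xi) < f.applyAsDouble(best)) best = xi;
        return best;
    }
}
```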
Disruption Tolerant Networks (DTNs) utilize the mobility of nodes and the opportunistic contacts among nodes for data communications. Due to the limitation in network resources such as contact opportunity and buffer space, DTNs are vulnerable to flood attacks in which attackers send as many packets or packet replicas as possible to the network, in order to deplete or overuse the limited network resources. In this paper, we employ rate limiting to defend against flood attacks in DTNs, such that each node has a limit over the number of packets that it can generate in each time interval and a limit over the number of replicas that it can generate for each packet. We propose a distributed scheme to detect if a node has violated its rate limits. To address the challenge that it is difficult to count all the packets or replicas sent by a node due to lack of communication infrastructure, our detection adopts claim-carry-and-check: each node itself counts the number of packets or replicas that it has sent and claims the count to other nodes; the receiving nodes carry the claims when they move, and cross-check if their carried claims are inconsistent when they contact. The claim structure uses the pigeonhole principle to guarantee that an attacker will make inconsistent claims, which may lead to detection. We provide rigorous analysis on the probability of detection, and evaluate the effectiveness and efficiency of our scheme with extensive trace-driven simulations.
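A toy sketch of the claim cross-checking idea (our own simplification, with hypothetical field names): if a sender exceeds its per-interval limit L while counts range over 1..L, the pigeonhole principle forces it to claim the same count for two different packets, which a receiving node can detect.

```java
import java.util.*;

// Each packet carries a claim (sender, count, packetId). Two carried claims
// with the same (sender, count) but different packets expose a rate-limit
// violation. Requires Java 16+ for records.
record Claim(String sender, int count, String packetId) {}

class ClaimChecker {
    private final Map<String, Claim> seen = new HashMap<>();

    /** Returns the offending sender's id if an inconsistency is found, else null. */
    String crossCheck(Claim c) {
        String key = c.sender() + "#" + c.count();
        Claim prev = seen.putIfAbsent(key, c);
        if (prev != null && !prev.packetId().equals(c.packetId()))
            return c.sender();   // same count claimed for two different packets
        return null;
    }
}
```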
Profit Analysis of a System Having One Main Unit And Two Supporting Units
V. K. Pathak, Kamal Mehta, Seema Sahu, Riteshwari Chaturvedi,
Reliability of functioning units is essential, be it for the memory chips, software, or hardware of computers, or for heavy machinery in an industrial system. Reliability analysis can give excellent results for improving the maintainability and portability of management design for existing and future products. Extensive reviews of two-component repairable system models have been presented by Lie et al. (1977) and Yearout et al. (1986). In all these models, it is assumed that the failure times and repair times of the components are independent. In this paper, reliability analysis of a system having one main unit and two supporting units is proposed, assuming that the system fails whenever the main unit fails, and shuts down whenever either both supporting units fail or the main unit and one of the supporting units fail. To improve the reliability of the system, the concept of preventive maintenance is also added. Using the regenerative point technique, various system parameters such as transition probabilities, mean sojourn times, mean time to system failure, availability, and the busy period of the repairman in repairing the failed units are calculated. Finally, profit analysis is carried out. In this paper, the failure time distributions are taken to be negative exponential, whereas the repair time distributions are arbitrary.