IEEE 2015 and 2016 .NET projects in Pune for BE, ME, and PhD

.NET project guidance company in Pune for completing your project in the quickest time

S3001 - Detection and Rectification of Distorted Fingerprints

Elastic distortion of fingerprints is one of the major causes of false non-matches. While this problem affects all fingerprint recognition applications, it is especially dangerous in negative recognition applications, such as watchlist and deduplication applications. In such applications, malicious users may purposely distort their fingerprints to evade identification. In this paper, we propose novel algorithms to detect and rectify skin distortion based on a single fingerprint image. Distortion detection is viewed as a two-class classification problem, for which the registered ridge orientation map and period map of a fingerprint are used as the feature vector and an SVM classifier is trained to perform the classification task. Distortion rectification (or equivalently, distortion field estimation) is viewed as a regression problem, where the input is a distorted fingerprint and the output is the distortion field. To solve this problem, a database (called the reference database) of various distorted reference fingerprints and their corresponding distortion fields is built in the offline stage; then, in the online stage, the nearest neighbor of the input fingerprint is found in the reference database and the corresponding distortion field is used to transform the input fingerprint into a normal one. Promising results have been obtained on three databases containing many distorted fingerprints, namely FVC2004 DB1, the Tsinghua Distorted Fingerprint database, and the NIST SD27 latent fingerprint database.
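
The online rectification stage described above (nearest-neighbor lookup of a stored distortion field) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the feature vectors, database contents, and function names are all assumptions for demonstration.

```python
import numpy as np

def nearest_distortion_field(input_features, ref_features, ref_fields):
    """Return the distortion field of the reference fingerprint whose
    feature vector (flattened orientation + period maps) is nearest
    to the input's, by Euclidean distance."""
    dists = np.linalg.norm(ref_features - input_features, axis=1)
    return ref_fields[np.argmin(dists)]

# Toy reference database: 3 reference fingerprints with 4-dim feature
# vectors, each paired with a 2x2x2 distortion field (dx, dy per pixel).
ref_features = np.array([[0.0, 0.0, 0.0, 0.0],
                         [1.0, 1.0, 1.0, 1.0],
                         [2.0, 2.0, 2.0, 2.0]])
ref_fields = np.arange(3 * 2 * 2 * 2).reshape(3, 2, 2, 2).astype(float)

# The input is closest to the second reference fingerprint, so its
# distortion field is the one applied to rectify the input image.
field = nearest_distortion_field(np.array([0.9, 1.1, 1.0, 1.0]),
                                 ref_features, ref_fields)
```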

S3002 - Public Integrity Auditing for Shared Dynamic Cloud Data with Group User Revocation

The advent of cloud computing has made storage outsourcing a rising trend, which in turn has made secure remote data auditing a hot topic in the research literature. Recently, some research has considered the problem of secure and efficient public data integrity auditing for shared dynamic data. However, these schemes are still not secure against collusion between the cloud storage server and revoked group users during user revocation in practical cloud storage systems. In this paper, we identify the collusion attack in the existing scheme and provide an efficient public integrity auditing scheme with secure group user revocation, based on vector commitment and verifier-local revocation group signatures. We design a concrete scheme based on our scheme definition. Our scheme supports public checking and efficient user revocation, as well as other desirable properties such as confidentiality, efficiency, countability, and traceability of secure group user revocation. Finally, the security and experimental analysis show that, compared with relevant schemes, our scheme is both secure and efficient.

S3003 - Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage

The capability of selectively sharing encrypted data with different users via public cloud storage may greatly ease security concerns over inadvertent data leaks in the cloud. A key challenge to designing such encryption schemes lies in the efficient management of encryption keys. The desired flexibility of sharing any group of selected documents with any group of users demands that different encryption keys be used for different documents. However, this also implies the necessity of securely distributing to users a large number of keys for both encryption and search; those users will have to securely store the received keys, and submit an equally large number of keyword trapdoors to the cloud in order to perform search over the shared data. The implied need for secure communication, secure storage, and computational effort clearly renders the approach impractical. In this paper, we address this practical problem, which is largely neglected in the literature, by proposing the novel concept of key-aggregate searchable encryption (KASE) and instantiating the concept through a concrete KASE scheme, in which a data owner only needs to distribute a single key to a user for sharing a large number of documents, and the user only needs to submit a single trapdoor to the cloud for querying the shared documents. The security analysis and performance evaluation both confirm that our proposed schemes are provably secure and practically efficient.

S3004 - A Dynamic Secure Group Sharing Framework in Public Cloud Computing

With the popularity of group data sharing in public cloud computing, the privacy and security of group-shared data have become two major issues. The cloud provider cannot be treated as a trusted third party because of its semi-trusted nature, and thus traditional security models cannot be straightforwardly generalized to cloud-based group sharing frameworks. In this paper, we propose a novel secure group sharing framework for the public cloud, which can effectively take advantage of the cloud servers' help while exposing no sensitive data to attackers or the cloud provider. The framework combines proxy signatures, an enhanced TGDH scheme, and proxy re-encryption into a single protocol. By applying the proxy signature technique, the group leader can effectively grant the privilege of group management to one or more chosen group members. The enhanced TGDH scheme enables the group to negotiate and update the group key pairs with the help of cloud servers, without requiring all group members to be online all the time. By adopting proxy re-encryption, most computationally intensive operations can be delegated to cloud servers without disclosing any private information. Extensive security and performance analysis shows that our proposed scheme is highly efficient and satisfies the security requirements for public cloud-based secure group sharing.

S3005 - Joint Congestion Control and Routing Optimization: An Efficient Second-Order Distributed Approach

Distributed joint congestion control and routing optimization has received a significant amount of attention recently. To date, however, most of the existing schemes follow a key idea called the back-pressure algorithm. Despite having many salient features, the first-order subgradient nature of the back-pressure based schemes results in slow convergence and poor delay performance. To overcome these limitations, in this paper, we make a first attempt at developing a second-order joint congestion control and routing optimization framework that offers utility-optimality, queue-stability, fast convergence, and low delay. Our contributions in this paper are three-fold: i) we propose a new second-order joint congestion control and routing framework based on a primal-dual interior-point approach; ii) we establish utility-optimality and queue-stability of the proposed second-order method; and iii) we show how to implement the proposed second-order method in a distributed fashion.
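
The convergence advantage claimed above can be illustrated with a toy one-dimensional example (our own illustration, not the paper's algorithm): minimizing the barrier-style objective g(x) = x - log(x), whose minimizer is x* = 1, a Newton (second-order) step reaches the optimum in far fewer iterations than a first-order gradient step.

```python
def grad(x):
    """g'(x) = 1 - 1/x for g(x) = x - log(x)."""
    return 1.0 - 1.0 / x

def hess(x):
    """g''(x) = 1/x^2."""
    return 1.0 / (x * x)

def iterations(step_fn, x=0.5, tol=1e-6, max_iter=10000):
    """Count iterations of the given update rule until |x - 1| < tol."""
    n = 0
    while abs(x - 1.0) > tol and n < max_iter:
        x = step_fn(x)
        n += 1
    return n

# First-order: fixed-step gradient descent (linear convergence).
first_order = iterations(lambda x: x - 0.1 * grad(x))
# Second-order: Newton's method (quadratic convergence near x*).
second_order = iterations(lambda x: x - grad(x) / hess(x))
```

On this example the Newton iteration converges in a handful of steps while gradient descent needs two orders of magnitude more, mirroring the slow convergence of first-order subgradient schemes that the paper sets out to overcome.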

S3006 - Fuzzy based energy efficient multicast routing for ad-hoc network

An ad-hoc network is an infrastructure-less wireless network. Since ad-hoc networks are self-organizing, rapidly deployable wireless networks, they are highly suitable for various applications. Every node of an ad-hoc network is connected dynamically in an arbitrary manner. There is no default router in such a network, because all nodes behave as routers and take part in the discovery and maintenance of routes to other nodes. Ad-hoc nodes are powered by batteries of limited capacity due to the network's distributed nature; energy is consumed when sending a packet, receiving a packet, and selecting the next-hop node. Hence, the present paper proposes a routing protocol named Fuzzy Based Energy Efficient Multicast Routing (FBEEMR) for ad-hoc networks. The basic idea of FBEEMR is to select, based on fuzzy logic, the path that most reduces the energy consumption of ad-hoc nodes. The protocol is mainly intended to extend the lifetime of the ad-hoc network through energy-efficient multicast routing, by calculating route lifetime values for each route. Based on comprehensive simulations and a comparative study with other existing protocols, it is observed that the proposed routing protocol improves performance in terms of energy efficiency.
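
Fuzzy-logic route selection of the kind described above can be sketched as follows. The membership functions and the rule (combine memberships with the fuzzy AND, i.e., the minimum) are our own illustrative assumptions, not the FBEEMR rule base.

```python
def high_energy(e, full=100.0):
    """Membership in the fuzzy set 'high residual energy' (0..1)."""
    return max(0.0, min(1.0, e / full))

def short_route(hops, max_hops=10):
    """Membership in the fuzzy set 'short route' (0..1)."""
    return max(0.0, min(1.0, 1.0 - hops / max_hops))

def route_score(route):
    """Fuzzy AND (minimum) of the two memberships; the bottleneck node
    (minimum residual energy along the route) determines the energy term."""
    bottleneck = min(route["energies"])
    return min(high_energy(bottleneck), short_route(route["hops"]))

def select_route(routes):
    return max(routes, key=route_score)

routes = [
    {"name": "A", "energies": [90, 80, 85], "hops": 3},  # strong, short
    {"name": "B", "energies": [95, 20, 90], "hops": 2},  # weak bottleneck
    {"name": "C", "energies": [70, 75, 60], "hops": 8},  # long route
]
best = select_route(routes)
```

Route A wins here: B is shorter but its 20%-energy bottleneck node would die quickly, which is exactly the trade-off the energy-aware fuzzy scoring is meant to capture.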

S3007 - Enabling Cloud Storage Auditing With Key-Exposure Resistance

Cloud storage auditing is viewed as an important service to verify the integrity of the data in the public cloud. Current auditing protocols are all based on the assumption that the client's secret key for auditing is absolutely secure. However, such an assumption may not always hold, due to a possibly weak sense of security and/or low security settings at the client. If such a secret key for auditing is exposed, most current auditing protocols would inevitably become unable to work. In this paper, we focus on this new aspect of cloud storage auditing. We investigate how to reduce the damage of the client's key exposure in cloud storage auditing, and give the first practical solution for this new problem setting. We formalize the definition and the security model of an auditing protocol with key-exposure resilience and propose such a protocol. In our design, we employ a binary tree structure and the preorder traversal technique to update the secret keys for the client. We also develop a novel authenticator construction to support forward security and the property of blockless verifiability. The security proof and the performance analysis show that our proposed protocol is secure and efficient.
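
The binary-tree/preorder idea can be sketched roughly as follows: the key for time period t is derived one-way from the root key along the path to the t-th node of the tree in preorder. This is a hedged simplification of the general technique, not the paper's protocol (which also manages sibling-node keys so the client can keep evolving forward); the tree size and hash-based derivation are our assumptions.

```python
import hashlib

def preorder_path(i, n):
    """Path of moves (0 = left, 1 = right) from the root to the i-th
    node in preorder of a perfect binary tree with n nodes (n = 2^k - 1)."""
    path = []
    while i > 0:
        half = (n - 1) // 2          # size of each subtree
        if i <= half:                # node lies in the left subtree
            path.append(0)
            i, n = i - 1, half
        else:                        # node lies in the right subtree
            path.append(1)
            i, n = i - 1 - half, half
    return path

def period_key(root_key, period, n=7):
    """Derive the secret key for a time period by hashing one-way along
    the preorder path; exposing a later key reveals nothing about the
    root or about keys off its derivation path."""
    key = root_key
    for bit in preorder_path(period, n):
        key = hashlib.sha256(key + bytes([bit])).digest()
    return key
```

For a 7-node tree, preorder visits root, left child, left-left, left-right, right child, and so on, so consecutive time periods walk the tree exactly once.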

S3008 - A Computational Dynamic Trust Model for User Authorization

Development of authorization mechanisms for secure information access by a large community of users in an open environment is an important problem in the ever-growing Internet world. In this paper we propose a computational dynamic trust model for user authorization, rooted in findings from social science. Unlike most existing computational trust models, this model distinguishes trusting belief in integrity from that in competence in different contexts and accounts for subjectivity in the evaluation of a particular trustee by different trusters. Simulation studies were conducted to compare the performance of the proposed integrity belief model with other trust models from the literature for different user behavior patterns. Experiments show that the proposed model achieves higher performance than other models especially in predicting the behavior of unstable users.
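
The core idea of separating trusting belief in integrity from belief in competence, per context, can be sketched in a few lines. This is our own toy illustration (simple exponential moving average updates), not the authors' computational model.

```python
class DynamicTrust:
    """Keeps a separate belief per (trustee, dimension, context) triple,
    where dimension is 'integrity' or 'competence', updated from observed
    interaction outcomes so that recent behaviour weighs more."""

    def __init__(self, learning_rate=0.3, initial_belief=0.5):
        self.rate = learning_rate
        self.initial = initial_belief
        self.beliefs = {}  # (trustee, dimension, context) -> belief in [0, 1]

    def update(self, trustee, dimension, context, outcome):
        """outcome in [0, 1]: 1 = positive interaction, 0 = negative."""
        key = (trustee, dimension, context)
        old = self.beliefs.get(key, self.initial)
        self.beliefs[key] = (1 - self.rate) * old + self.rate * outcome

    def belief(self, trustee, dimension, context):
        return self.beliefs.get((trustee, dimension, context), self.initial)

t = DynamicTrust()
# An honest but incompetent user: integrity rises while competence falls,
# something a single-score trust model cannot express.
for _ in range(10):
    t.update("bob", "integrity", "file-sharing", 1.0)
    t.update("bob", "competence", "file-sharing", 0.0)
```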

S3009 - CloudArmor: Supporting Reputation based Trust Management for Cloud Services

Trust management is one of the most challenging issues for the adoption and growth of cloud computing. The highly dynamic, distributed, and non-transparent nature of cloud services introduces several challenging issues such as privacy, security, and availability. Preserving consumers’ privacy is not an easy task due to the sensitive information involved in the interactions between consumers and the trust management service. Protecting cloud services against their malicious users (e.g., such users might give misleading feedback to disadvantage a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust management service is another significant challenge because of the dynamic nature of cloud environments. In this article, we describe the design and implementation of CloudArmor, a reputation-based trust management framework that provides a set of functionalities to deliver Trust as a Service (TaaS), which includes i) a novel protocol to prove the credibility of trust feedbacks and preserve users’ privacy, ii) an adaptive and robust credibility model for measuring the credibility of trust feedbacks to protect cloud services from malicious users and to compare the trustworthiness of cloud services, and iii) an availability model to manage the availability of the decentralized implementation of the trust management service. The feasibility and benefits of our approach have been validated by a prototype and experimental studies using a collection of real-world trust feedbacks on cloud services.
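
A credibility model in the spirit described above can be sketched as follows; the actual CloudArmor model is considerably more elaborate, and the down-weighting rule here (deviation from the median consensus) is our own illustrative assumption.

```python
def credibility(feedback, consensus):
    """Feedback far from the consensus is considered less credible."""
    return 1.0 / (1.0 + abs(feedback - consensus))

def trust_score(feedbacks):
    """Credibility-weighted average of trust feedbacks, limiting the
    influence of outlier (potentially malicious) ratings."""
    consensus = sorted(feedbacks)[len(feedbacks) // 2]  # median
    weights = [credibility(f, consensus) for f in feedbacks]
    return sum(w * f for w, f in zip(weights, feedbacks)) / sum(weights)

honest = [4.0, 4.0, 5.0, 4.0]
attacked = honest + [0.0]            # one malicious low rating injected
score = trust_score(attacked)
plain_mean = sum(attacked) / len(attacked)
```

The credibility-weighted score stays close to the honest consensus of about 4, while the plain mean is dragged noticeably downward by the single malicious rating.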

S30010 - Serving the readers of scholarly documents: A grand challenge for the introspective digital library

The scholarly literature produced by human civilization will soon be considered small data, able to be portably conveyed over the network and carried on personal machines. This semi-structured, text-centric knowledge base is a focus of attention for scholars, as the wealth of facts, facets, and connections in scholarly documents is large. Machine analysis of this knowledge base can derive insights that inform policy makers, academic and industrial management, and scholars as authors themselves. There is another underserved community of scholarly document users that has been overlooked: the readers themselves. I call for the community to put more effort towards supporting our own scholars (especially beginning scholars, new to the research process) with automation from information retrieval and natural language processing. Techniques that mine information from within the full text of a document could be used to introspect a digital library's materials, inferring better search metadata, improving scholarly document recommendation, and aiding the understanding of the text, figures, presentations, and citations of our scholarly literature. Such an introspective digital library will enable scholars to assemble an understanding of other scholars' work more efficiently, and provide downstream machine reading applications with input for their analytics.

S30011 - Image inpainting by minimum energy restoration with edge-prioritized filling order

We propose a texture-synthesis image inpainting method that minimizes an energy function with a filling order that facilitates the propagation of edges from the data region to the target region. As a result, the proposed method provides plausible restoration while propagating edge information into the target region. Experimental results show the validity of the proposed method.
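
An edge-prioritized filling order can be illustrated roughly as follows. This is a deliberate simplification, not the paper's energy formulation: we assume an edge-strength map over the known pixels is already available, and simply schedule hole-boundary pixels that touch strong edges first.

```python
def fill_order(strength, mask):
    """strength[i][j]: edge magnitude at known pixels; mask[i][j]: True
    where the pixel is missing. Returns fill-front positions ordered so
    that pixels adjacent to strong edges in the known (data) region are
    filled first, letting edges propagate into the target region."""
    h, w = len(mask), len(mask[0])
    front = []
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            known = [strength[a][b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < h and 0 <= b < w and not mask[a][b]]
            if known:                     # pixel lies on the fill front
                front.append((max(known), (i, j)))
    front.sort(reverse=True)
    return [pos for _, pos in front]

# 3x3 example: two missing pixels; a strong edge (9.0) abuts the one at (2, 2)
mask = [[True,  False, False],
        [False, False, False],
        [False, False, True ]]
strength = [[0.1, 0.2, 0.1],
            [0.1, 0.3, 0.2],
            [0.2, 9.0, 0.0]]
order = fill_order(strength, mask)
```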

S30012 - On Local Prediction Based Reversible Watermarking

The use of local prediction in difference expansion reversible watermarking provides very good results, but at the cost of computing for each pixel a least square predictor in a square block centered on the pixel. This correspondence investigates the reduction of the mathematical complexity by computing distinct predictors not for pixels, but for groups of pixels. The same predictors are recovered at detection. Experimental results for the case of prediction on the rhombus defined by the four horizontal and vertical neighbors are provided. It is shown that by computing a predictor for a pair of pixels, the computational cost is halved without any loss in performance. A small loss appears for groups of three and four pixels with the advantage of reducing the mathematical complexity to a third and a fourth, respectively.
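
Difference-expansion embedding with the rhombus predictor can be sketched as follows. For brevity this toy version uses the plain average of the four horizontal/vertical neighbors as the predictor, where the paper computes least-squares predictors per group of pixels; in practice, embedding proceeds in a checkerboard pattern so that each pixel's four neighbors are unmodified at extraction time.

```python
def rhombus_predict(img, i, j):
    """Predict a pixel from its four horizontal/vertical neighbours."""
    return (img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1]) // 4

def embed_bit(img, i, j, bit):
    """Hide one bit by expanding the prediction error: e -> 2e + bit."""
    p = rhombus_predict(img, i, j)
    e = img[i][j] - p
    img[i][j] = p + 2 * e + bit

def extract_bit(img, i, j):
    """Recover the bit and restore the original pixel (reversibility)."""
    p = rhombus_predict(img, i, j)
    e2 = img[i][j] - p
    bit = e2 % 2                     # 2e + bit is even iff bit == 0
    img[i][j] = p + (e2 - bit) // 2  # undo the expansion exactly
    return bit

img = [[10, 12, 11],
       [13, 14, 12],
       [11, 13, 12]]
embed_bit(img, 1, 1, 1)          # watermark the centre pixel
marked_value = img[1][1]
recovered = extract_bit(img, 1, 1)
```

Here the predictor gives p = 12 and error e = 2, so the marked pixel becomes 12 + 2·2 + 1 = 17; extraction recovers the bit from the parity of the expanded error and restores the pixel to 14 exactly.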

S30013 - Learning analytics system for assessing students' performance quality and text mining in online communication

A challenging and demanding task for teachers and researchers in e-learning environments is the assessment of students' performance. This paper presents a new learning analytics system for Learning Management Systems (LMS) that aids and supports teachers and researchers in understanding and analyzing the interaction patterns and knowledge construction of participants involved in ongoing online interactions. It is seamlessly integrated into Moodle. Learning Management Systems do not include analytics tools for comprehensive audit logs of students' activities, lack log analysis capabilities for interactions, offer poor evaluation of participation levels, and provide little support for assessing the quality of students' performance. Semantic similarity measures of text play an increasingly important role in text-related research and applications, in tasks such as text mining, web page retrieval, and dialogue systems. Existing methods for computing sentence similarity have been adapted to the message texts exchanged in an LMS. The system measures the semantic similarity between texts exchanged during communication sessions in order to find the degree of coherence in a discussion thread, reported as a relevance value in numerical format.
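
Measuring the coherence of a discussion thread by message similarity can be sketched with a simple baseline. Note that the system described above uses semantic similarity measures; the bag-of-words cosine similarity below is only a stand-in for illustration, and the example messages are invented.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between two messages as bag-of-words vectors;
    1.0 = identical term distribution, 0.0 = no terms in common."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

topic = "binary search trees keep keys in sorted order"
on_topic = "a binary search tree stores keys in sorted order"
off_topic = "the cafeteria menu changes every friday"

# A coherent reply scores well above an off-topic one, giving a numeric
# relevance value per message in the thread.
coherent = cosine_similarity(topic, on_topic)
incoherent = cosine_similarity(topic, off_topic)
```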