
Sunday, 13 July 2014

BULK IEEE PROJECTS 2014 - 2015

Dear Client,

Greetings from LansA Informatics Pvt Ltd! We would like to let you know that we have launched a training division exclusively for CSE & IT department students. We have also launched a separate IEEE / non-IEEE project development division exclusively for CSE & IT students. Our motto is to provide quality education that will make the upcoming generation stronger in technology.

HIGHLIGHTS:

      1. We are happy to let you know that we offer projects related only to CSE / IT.
      2. Projects are developed by experts with rich experience in technology.
      3. We have been recognized by leading newspapers such as "INDIAN EXPRESS",
          "DINAKARAN", and "DHINAMALAR".
      4. We have organized various workshops and conferences at leading colleges such as
          COIMBATORE INSTITUTE OF TECHNOLOGY
          ARJUN COLLEGE OF TECHNOLOGY
      5. We have trained 5000+ students so far.

PROJECTS ARE PROVIDED IN VARIOUS DOMAINS:
  • Cloud computing
  • Networking
  • Wireless Communications
  • Mobile Computing
  • Network Security
  • Secure & Dependable Computing
  • Data Mining / Knowledge and Data Engineering
  • Image processing
  • Parallel & Distributed Systems
  • Information Security

TECHNOLOGIES WE USE TO DEVELOP PROJECTS:
  • Dotnet
  • Java/J2EE/J2ME
  • Android
  • Matlab
PROJECT DELIVERABLES / SUPPORT FOR CLIENTS:

  • Project IEEE Base paper
  • Project reference paper
  • Project abstract
  • Project complete documentation
  • Project presentation in PPT (4 reviews)
  • Project source code
  • Project demo video / how-to-run help file
  • Software packages
  • TeamViewer support to execute the projects

FOR MORE DETAILS CONTACT US:

Papitha Velumani, 
Managing Director,
LansA Informatics Pvt Ltd
No 165, 5th Street,
Crosscut road, Gandhipuram,
Coimbatore - 641 012

Landline: 0422 - 4204373
Mobile: +91 90 953 953 33

Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage



IDENTITY-BASED DISTRIBUTED PROVABLE DATA POSSESSION IN MULTI-CLOUD STORAGE

ABSTRACT:

Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify that their outsourced data is kept intact without downloading the whole data set. In some application scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification and public verification.
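
To give a feel for the challenge-response flow sketched in the abstract, the following is a minimal Java illustration of spot-check style integrity auditing, under a deliberately simplified assumption: the client precomputes a one-time challenge/response pair per audit before outsourcing the block. The real ID-DPDP protocol instead builds homomorphic tags from bilinear pairings, which allow unlimited challenges and let proofs from multiple cloud servers be combined; none of that machinery appears here.

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

// Toy spot-check audit: the verifier checks possession of a block
// without downloading it, by spending a precomputed (nonce, digest) pair.
public class AuditSketch {

    static byte[] digest(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        byte[] block = "outsourced data block".getBytes();

        // Setup (client): draw a secret nonce, precompute the expected
        // answer, upload the block, and keep only (nonce, expected).
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        byte[] expected = digest(nonce, block);

        // Challenge (verifier -> server): reveal the nonce.
        // Response (server): hash the nonce together with the block it
        // actually stores; without the block it cannot answer correctly.
        byte[] storedBlock = block;   // corrupt a byte here to see the check fail
        byte[] proof = digest(nonce, storedBlock);

        // Verification (client): compare, without re-downloading the data.
        System.out.println("block intact: " + Arrays.equals(proof, expected));
    }
}
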
EXISTING SYSTEM:
The foundations of cloud computing lie in outsourcing computing tasks to a third party, which entails security risks in terms of confidentiality, integrity and availability of data and services. Convincing cloud clients that their data are kept intact is especially vital, since the clients do not store these data locally. Remote data integrity checking is a primitive designed to address this issue. In the general case, when a client stores his data on multi-cloud servers, distributed storage and integrity checking are indispensable. On the other hand, the integrity checking protocol must be efficient in order to make it suitable for capacity-limited end devices. Thus, based on distributed computation, we study a distributed remote data integrity checking model and present the corresponding concrete protocol for multi-cloud storage.

DISADVANTAGES OF EXISTING SYSTEM:
• Data checking is more complex when multiple cloud servers are involved.
• A large amount of storage space is needed.
• Insufficient protection against data loss.

PROPOSED SYSTEM:
Within the framework of identity-based public key cryptography, this paper focuses on distributed provable data possession in multi-cloud storage. The protocol is made efficient by eliminating certificate management. We propose a new remote data integrity checking model: ID-DPDP. The system model and security model are formally defined. Then, based on bilinear pairings, the concrete ID-DPDP protocol is designed. In the random oracle model, our ID-DPDP protocol is provably secure. Moreover, besides its high efficiency, our protocol is also flexible: based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification and public verification.
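
The three verification modes named above can be pictured as a simple authorization gate in front of the audit procedure. The short Java sketch below is hypothetical (the type and method names are ours, not the paper's); it only shows how the client's authorization could decide who is allowed to run the check:

// Hypothetical authorization gate for the three verification modes.
enum VerifierRole { OWNER, DELEGATE, PUBLIC }

class VerificationPolicy {
    private final boolean delegationGranted;   // client authorized a third party
    private final boolean publiclyVerifiable;  // client opened verification to anyone

    VerificationPolicy(boolean delegationGranted, boolean publiclyVerifiable) {
        this.delegationGranted = delegationGranted;
        this.publiclyVerifiable = publiclyVerifiable;
    }

    boolean canVerify(VerifierRole who) {
        switch (who) {
            case OWNER:    return true;               // private verification
            case DELEGATE: return delegationGranted;  // delegated verification
            case PUBLIC:   return publiclyVerifiable; // public verification
            default:       return false;
        }
    }
}
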

ADVANTAGES OF PROPOSED SYSTEM:
• It requires less storage space.
• It provides secure public verification of outsourced data.
• It uses identity-based private key generation, which eliminates certificate management.

SYSTEM ARCHITECTURE:

[Figure: ID-DPDP system architecture diagram not reproduced here.]
SYSTEM CONFIGURATION:

HARDWARE REQUIREMENTS:

Processor                  -        Pentium IV

Speed                        -        1.1 GHz
RAM                         -        512 MB (min)
Hard Disk                 -        40 GB
Key Board                -        Standard Windows keyboard
Mouse                       -        Two- or three-button mouse
Monitor                     -        LCD/LED
SOFTWARE REQUIREMENTS:
Operating system      :         Windows XP
Coding Language      :         .NET
Database                   :         SQL Server 2005
Tool                          :         Visual Studio 2008

REFERENCE:
Wang, H., "Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage," IEEE Transactions on Cloud Computing, vol. PP, no. 99, March 2014.

Monday, 7 July 2014

Probabilistic Consolidation Of Virtual Machines In Self-Organizing Cloud Data Centers



PROBABILISTIC CONSOLIDATION OF VIRTUAL MACHINES IN SELF-ORGANIZING CLOUD DATA CENTERS
 

ABSTRACT:

Power efficiency is one of the main issues that will drive the design of data centers, especially of those devoted to providing Cloud computing services. In virtualized data centers, consolidation of Virtual Machines (VMs) on the minimum number of physical servers has been recognized as a very efficient approach, as this allows unloaded servers to be switched off or used to accommodate more load, which is clearly a cheaper alternative to buying more resources. The consolidation problem must be solved on multiple dimensions, since in modern data centers CPU is not the only critical resource: depending on the characteristics of the workload, other resources, for example RAM and bandwidth, can become the bottleneck. The problem is so complex that centralized and deterministic solutions are practically useless in large data centers with hundreds or thousands of servers. This paper presents ecoCloud, a self-organizing and adaptive approach for the consolidation of VMs on two resources, namely CPU and RAM. Decisions on the assignment and migration of VMs are driven by probabilistic processes and are based exclusively on local information, which makes the approach very simple to implement. Both a fluid-like mathematical model and experiments on a real data center show that the approach rapidly consolidates the workload, and that CPU-bound and RAM-bound VMs are balanced, so that both resources are exploited efficiently.

EXISTING SYSTEM:
In the past few years, important results have been achieved in terms of energy consumption reduction, especially by improving the efficiency of cooling and power supply facilities in data centers. The Power Usage Effectiveness (PUE) index, defined as the ratio of the overall power entering the data center to the power devoted to computing facilities, had typical values between 2 and 3 only a few years ago, while big Cloud companies have now reached values lower than 1.1. However, much room remains for optimizing the computing facilities themselves. It has been estimated that most of the time servers operate at 10-50 percent of their full capacity [2], [3]. This low utilization is also caused by the intrinsic variability of VMs' workload: the data center is planned to sustain peaks of load, while for long periods of time (for example, during nights and weekends) the load is much lower [4], [5]. Since an active but idle server consumes between 50 and 70 percent of the power consumed when it is fully utilized [6], a large amount of energy is used even at low utilization.
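
As a worked example of the PUE definition above: a facility drawing 3.0 MW overall to operate computing equipment that consumes 1.5 MW has a PUE of 3.0 / 1.5 = 2.0, while a facility that needs only 1.65 MW overall for the same equipment achieves a PUE of 1.65 / 1.5 = 1.1.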

DISADVANTAGES OF EXISTING SYSTEM:
·       Power consumption is high.
·       A large amount of energy is used even at low utilization.

PROBLEM STATEMENT:
The ever increasing demand for computing resources has led companies and resource providers to build large warehouse-sized data centers, which require a significant amount of power to be operated and hence consume a lot of energy.
SCOPE:
The optimal assignment of VMs to reduce power consumption.
PROPOSED SYSTEM:
We presented ecoCloud, an approach for consolidating VMs on a single computing resource, i.e., the CPU. Here, the approach is extended to the multi-dimensional problem, and is presented for the specific case in which VMs are consolidated with respect to two resources: CPU and RAM. With ecoCloud, VMs are consolidated using two types of probabilistic procedures, for the assignment and the migration of VMs. Both procedures aim at increasing the utilization of servers and consolidating the workload dynamically, with the twofold objective of saving electrical costs and respecting the Service Level Agreements stipulated with users. All this is done by delegating the key decisions to individual servers, while the data center manager is only requested to properly combine such local decisions. The approach is partly inspired by the ant algorithms used first by Deneubourg et al. [9], and subsequently by a wide research community, to model the behavior of ant colonies and solve many complex distributed problems. The characteristics inherited from such algorithms make ecoCloud novel and different from other solutions. Among these characteristics are: 1) the use of the swarm intelligence paradigm, which allows a complex problem to be solved by combining simple operations performed by many autonomous actors (the single servers in our case); 2) the use of probabilistic procedures, inspired by those that model the operations of real ants; and 3) the self-organizing behavior of the system, which ensures that the assignment of VMs to servers dynamically adapts to the varying workload.
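
To make the probabilistic assignment procedure concrete, here is a minimal Java sketch of the idea under simple assumptions: each server runs an independent Bernoulli trial per resource dimension (CPU and RAM), with an acceptance probability that grows with its current utilization (so load concentrates on already-busy servers and idle ones can be switched off) and drops to zero above a safety threshold (so Service Level Agreements are not endangered). The function shape and the parameters T and P below are illustrative choices, not the calibrated functions of the paper.

import java.util.Random;

// Toy version of a local, probabilistic VM-assignment decision.
public class EcoCloudSketch {
    static final double T = 0.9;   // utilization safety threshold
    static final double P = 2.0;   // steepness of the preference for busy servers
    static final Random RND = new Random();

    // f(u) = u^P * (T - u), rescaled so its peak equals 1; zero beyond T.
    static double acceptProbability(double u) {
        if (u <= 0 || u >= T) return 0.0;
        double peakAt = P * T / (P + 1);
        double peak = Math.pow(peakAt, P) * (T - peakAt);
        return Math.pow(u, P) * (T - u) / peak;
    }

    // The server bids for a VM only if both its CPU and RAM trials succeed,
    // which naturally balances CPU-bound and RAM-bound VMs across servers.
    static boolean bidForVm(double cpuUtil, double ramUtil) {
        return RND.nextDouble() < acceptProbability(cpuUtil)
            && RND.nextDouble() < acceptProbability(ramUtil);
    }

    public static void main(String[] args) {
        System.out.printf("idle server (5%% CPU, 5%% RAM) bids with p = %.3f%n",
                acceptProbability(0.05) * acceptProbability(0.05));
        System.out.printf("busy server (60%% CPU, 55%% RAM) bids with p = %.3f%n",
                acceptProbability(0.60) * acceptProbability(0.55));
        System.out.println("sample bid from the busy server: " + bidForVm(0.60, 0.55));
    }
}

Note that a completely idle server almost never bids under this shape; in the paper the data center manager combines the local decisions, and a fallback (for example, waking an idle server when no bid arrives) would handle the cold-start case.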

ADVANTAGES OF PROPOSED SYSTEM:
·       Efficient CPU usage.
·       It reduces power consumption.
·       Efficient resource utilization.

SYSTEM ARCHITECTURE:

[Figure: ecoCloud system architecture diagram not reproduced here.]
SYSTEM CONFIGURATION:

HARDWARE REQUIREMENTS:

Processor                  -        Pentium IV

Speed                        -        1.1 GHz
RAM                         -        512 MB (min)
Hard Disk                 -        40 GB
Key Board                -        Standard Windows keyboard
Mouse                       -        Two- or three-button mouse
Monitor                     -        LCD/LED

SOFTWARE REQUIREMENTS:

         Operating system :         Windows XP
         Coding Language :         Java
         Database             :         MySQL
         Tool                     :         NetBeans IDE

REFERENCE:
Carlo Mastroianni, Michela Meo, and Giuseppe Papuzzo, "Probabilistic Consolidation of Virtual Machines in Self-Organizing Cloud Data Centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, July-December 2013.