The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test. Each test type addresses a specific testing requirement.




8.1.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
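As a minimal sketch of this idea, the following unit test suite exercises a hypothetical `apply_discount` function (the function and its specification are assumptions for illustration): every decision branch, including the error branch, is validated against clearly defined inputs and expected results.

```python
import unittest

def apply_discount(price, rate):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_valid_rate(self):
        # A valid input must produce the documented output.
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_boundary_rates(self):
        # Exercise both boundaries of the valid-rate decision.
        self.assertEqual(apply_discount(100.0, 0.0), 100.0)
        self.assertEqual(apply_discount(100.0, 1.0), 0.0)

    def test_invalid_rate_rejected(self):
        # The error branch must also be validated.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)
```

Run with `python -m unittest` against the file containing the suite.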




8.1.2 Integration testing


Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
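A small sketch of this, using a hypothetical parser and report formatter (both names are assumptions): each component would pass its own unit tests, and the integration test feeds real output of one into the other so that an interface defect, such as a renamed dictionary key, is exposed.

```python
# Hypothetical pair of components that each pass their unit tests;
# the integration test checks that they run correctly as one program.

def parse_record(line):
    """Component A: parse 'name,score' into a dict."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def format_report(records):
    """Component B: summarise parsed records for a screen/report."""
    total = sum(r["score"] for r in records)
    return f"{len(records)} records, total score {total}"

def test_parser_and_formatter_integrate():
    # Outcome-focused check: feed real parser output into the
    # formatter instead of hand-built fixtures, so a mismatch at
    # the interface between the two components is exposed.
    records = [parse_record("alice, 10"), parse_record("bob, 5")]
    assert format_report(records) == "2 records, total score 15"

test_parser_and_formatter_integrate()
```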



8.1.3 Functional test


Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.


Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
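The valid-input/invalid-input/output items above can be sketched as follows, assuming a hypothetical `register_email` function as the feature under test (the function, its name, and the e-mail pattern are illustrative assumptions, not a real validator):

```python
import re

def register_email(address):
    """Hypothetical function under functional test: accept a valid
    e-mail address and reject anything else, per the requirements."""
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address):
        return "accepted"
    return "rejected"

# Valid input: identified classes of valid input must be accepted.
assert register_email("user@example.com") == "accepted"

# Invalid input: identified classes of invalid input must be rejected.
assert register_email("not-an-address") == "rejected"
assert register_email("two@@example.com") == "rejected"

# Output: both identified output classes ("accepted", "rejected")
# have now been exercised.
```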


8.1.4 System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.


8.1.5 White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
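A brief sketch of the idea: the hypothetical `lookup` function below has an internal size threshold that is invisible from the outside, so only a tester who has read the code can deliberately construct inputs that drive execution down each internal branch (the function and its threshold are assumptions for illustration).

```python
def lookup(table, key):
    """Hypothetical unit with an internal fast path (linear scan for
    small tables) and a fallback path; the split is invisible from
    the outside."""
    if len(table) < 4:            # internal branch: linear scan
        for k, v in table.items():
            if k == key:
                return v
        return None
    return table.get(key)         # internal branch: hashed lookup

# White-box tests: knowing the size-4 threshold, we build inputs
# that reach each internal branch, hit and miss alike.
small = {"a": 1}
large = {str(i): i for i in range(10)}

assert lookup(small, "a") == 1       # scan branch, hit
assert lookup(small, "z") is None    # scan branch, miss
assert lookup(large, "7") == 7       # hashed branch, hit
assert lookup(large, "z") is None    # hashed branch, miss
```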


8.1.6 Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.
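As a minimal sketch, the tests below are written purely from an assumed one-line specification of a hypothetical `slugify` function; the tester provides inputs and judges only the outputs the specification promises, never reading the implementation.

```python
# Specification (our only source document, assumed for illustration):
#   slugify(text) returns the text lower-cased with runs of spaces
#   replaced by single hyphens. Nothing else is known about it.

def slugify(text):
    # Opaque implementation: a black-box tester never reads this.
    return "-".join(text.lower().split())

# Black-box tests: inputs in, promised outputs out.
assert slugify("Hello World") == "hello-world"
assert slugify("TESTING") == "testing"
```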




6.1 Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.


Test strategy and approach

          Field testing will be performed
manually and functional tests will be written in detail.



Test objectives

All field entries must work properly.

Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format.

No duplicate entries should be allowed.

All links should take the user to the correct page.
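The format and duplicate checks above can be sketched as follows, assuming a hypothetical registry of user IDs with a documented ID format (the `Registry` class and the `AB1234`-style format are illustrative assumptions):

```python
import re

# Entries must match a documented format, and duplicates must be
# rejected, mirroring the test objectives listed above.
ID_FORMAT = re.compile(r"[A-Z]{2}\d{4}")

class Registry:
    def __init__(self):
        self._ids = set()

    def add(self, user_id):
        if not ID_FORMAT.fullmatch(user_id):
            raise ValueError("entry is not in the correct format")
        if user_id in self._ids:
            raise ValueError("duplicate entries are not allowed")
        self._ids.add(user_id)

reg = Registry()
reg.add("AB1234")                      # correct format: accepted

try:
    reg.add("1234AB")                  # wrong format: rejected
except ValueError as e:
    assert "format" in str(e)

try:
    reg.add("AB1234")                  # duplicate: rejected
except ValueError as e:
    assert "duplicate" in str(e)
```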


6.2 Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.


Test Results: All the test cases mentioned above
passed successfully. No defects encountered.


6.3 Acceptance Testing


Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It ensures that the system meets the functional requirements.



Test Results: All the test cases mentioned above
passed successfully. No defects encountered.


We present LIME, a model for accountable data transfer across multiple entities. We define the participating parties and their inter-relationships, and give a concrete instantiation of a data transfer protocol using a novel combination of oblivious transfer, robust watermarking and digital signatures. We prove its correctness and show that it is realizable by giving micro-benchmarking results. By presenting a generally applicable framework, we introduce accountability as early as the design phase of a data transfer infrastructure. Although LIME does not actively prevent data leakage, it introduces reactive accountability. Thus, it will deter malicious parties from leaking private documents and will encourage honest (but careless) parties to provide the required protection for sensitive data. LIME is flexible, as we differentiate between trusted senders (usually owners) and untrusted senders (usually consumers). In the case of a trusted sender, a very simple protocol with little overhead is possible. An untrusted sender requires a more complicated protocol, but the results are not based on trust assumptions and therefore should be able to convince a neutral entity (e.g., a judge).

Our work also motivates further
research on data leakage detection techniques for various document types and
scenarios. For example, it will be an interesting future research direction to
design a verifiable lineage protocol for derived data.




1) Multiple re-watermarking scenarios

AUTHORS: A. Mascher-Kampfer, H. Stögner, and A. Uhl

The use of classical robust watermarking techniques for multiple re-watermarking is discussed. In particular, we focus on a comparison of the usefulness of blind and non-blind algorithms for this type of application. A surprisingly high number of watermarks may be embedded using both approaches, provided that additional data is recorded in the non-blind case.


2) Data leakage detection

AUTHORS: P. Papadimitriou and H.

We study the following problem: a data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data are leaked and found in an unauthorized place (e.g., on the web or on somebody’s laptop). The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data allocation strategies (across the agents) that improve the probability of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases, we can also inject “realistic but fake” data records to further improve our chances of detecting leakage and identifying the guilty party.
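The following toy sketch illustrates the idea (it is a simplification for this report, not the paper's actual probabilistic model): the distributor records which objects each agent received, scores agents by how much of the leaked set they could explain, and treats a planted fake record as decisive, since no agent could have gathered it independently.

```python
# Toy guilt-assessment sketch. Agent names, record IDs and the
# scoring rule are illustrative assumptions.
allocations = {
    "agent1": {"r1", "r2", "r3", "fake-a"},
    "agent2": {"r2", "r4", "r5"},
}
fake_records = {"fake-a"}

def suspicion(leaked):
    """Score each agent by the fraction of leaked records it held."""
    scores = {}
    for agent, given in allocations.items():
        overlap = leaked & given
        score = len(overlap) / len(leaked)
        if overlap & fake_records:   # planted record found: decisive
            score = 1.0
        scores[agent] = score
    return scores

leaked = {"r1", "r3", "fake-a"}
scores = suspicion(leaked)
assert scores["agent1"] == 1.0   # held everything, incl. the fake
assert scores["agent2"] == 0.0   # held none of the leaked records
```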


3) Secure spread spectrum watermarking for multimedia

AUTHORS: I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon


This paper presents a secure (tamper-resistant) algorithm for watermarking images, and a methodology for digital watermarking that may be generalized to audio, video, and multimedia data. We advocate that a watermark should be constructed as an independent and identically distributed (i.i.d.) Gaussian random vector that is imperceptibly inserted in a spread-spectrum-like fashion into the perceptually most significant spectral components of the data. We argue that insertion of a watermark under this regime makes the watermark robust to signal processing operations (such as lossy compression, filtering, digital-to-analog and analog-to-digital conversion, requantization, etc.) and common geometric transformations (such as cropping, scaling, translation, and rotation), provided that the original image is available and that it can be successfully registered against the transformed watermarked image. In these cases, the watermark detector unambiguously identifies the owner. Further, the use of Gaussian noise ensures strong resilience to multiple-document, or collusional, attacks. Experimental results are provided to support these claims, along with an exposition of pending open problems.
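The embedding rule described above can be sketched in a few lines (simplified: the paper embeds into DCT coefficients of a real image, whereas this sketch uses a synthetic list of "significant components"):

```python
import random

# Spread-spectrum embedding sketch: insert an i.i.d. Gaussian
# watermark x into significant components v as
#   v'_i = v_i * (1 + alpha * x_i),
# then detect by correlating the extracted signal with the known
# watermark. Extraction uses the original, as the paper requires.
random.seed(42)
alpha = 0.1
n = 1000
v = [random.uniform(50, 100) for _ in range(n)]  # stand-in coefficients
x = [random.gauss(0, 1) for _ in range(n)]       # i.i.d. Gaussian watermark

v_marked = [vi * (1 + alpha * xi) for vi, xi in zip(v, x)]

def similarity(extracted, watermark):
    """Correlation-based detector response."""
    dot = sum(a * b for a, b in zip(extracted, watermark))
    norm = sum(a * a for a in extracted) ** 0.5
    return dot / norm

# Recover the watermark estimate: x*_i = (v'_i - v_i) / (alpha * v_i)
extracted = [(vm - vi) / (alpha * vi) for vm, vi in zip(v_marked, v)]
other = [random.gauss(0, 1) for _ in range(n)]   # an unrelated watermark

assert similarity(extracted, x) > 6      # true watermark: high response
assert similarity(extracted, other) < 6  # wrong watermark: low response
```

The detector response to the true watermark is on the order of the square root of the vector length, while an unrelated watermark yields a small value, which is what makes a fixed threshold workable.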


4) Asymmetric fingerprinting for larger collusions

AUTHORS: B. Pfitzmann and M. Waidner


Fingerprinting schemes deter people from illegal copying of digital data by enabling the merchant of the data to identify the original buyer of a copy that was redistributed illegally. All known fingerprinting schemes are symmetric in the following sense: both the buyer and the merchant know the fingerprinted copy. Thus, when the merchant finds this copy somewhere, there is no proof that it was the buyer who put it there, and not the merchant.

We introduce asymmetric fingerprinting, where only the buyer knows the fingerprinted copy, and the merchant, upon finding it somewhere, can find out and prove to third parties whose copy it was. We present a detailed definition of this concept and constructions. The first construction is based on a quite general symmetric fingerprinting scheme and general cryptographic primitives; it is provably secure if all these underlying schemes are. We also present more specific and more efficient constructions.

5) A digital signature scheme secure
against adaptive chosen-message attacks


AUTHORS: S. Goldwasser, S. Micali, and R. L.


We present a digital signature scheme based on the computational difficulty of integer factorization. The scheme possesses the novel property of being robust against an adaptive chosen-message attack: an adversary who receives signatures for messages of his choice (where each message may be chosen in a way that depends on the signatures of previously chosen messages) cannot later forge the signature of even a single additional message. This may be somewhat surprising, since in the folklore the properties of having forgery be equivalent to factoring and being invulnerable to an adaptive chosen-message attack were considered to be contradictory. More generally, we show how to construct a signature scheme with such properties based on the existence of a “claw-free” pair of permutations, a potentially weaker assumption than the intractability of integer factorization. The new scheme is potentially practical: signing and verifying signatures are reasonably fast, and signatures are compact.
