A New Possibility for Security and Privacy by Design:
Fault-Free Software

June 30, 2014

Ann Cavoukian, Ph.D.
Information and Privacy Commissioner
Ontario, Canada

Ian Percy and Brian Smith
Co-founders, Emendara
Acknowledgements
The authors gratefully acknowledge the contribution of Michelle Chibba, IPC
Director of Policy and Special Projects, and Alex Stoianov, IPC Senior Policy
Specialist, in the preparation of this paper. Thanks also to Fred Carter, Senior Policy
& Technology Advisor and David Weinkauf, Policy & Technology Officer for their
peer review comments.

Information and Privacy Commissioner
Ontario, Canada
2 Bloor Street East, Suite 1400
Toronto, Ontario, Canada M4W 1A8
Telephone: 416-326-3333 / 1-800-387-0073
Fax: 416-325-9195
TTY (Teletypewriter): 416-325-7539
Website: www.ipc.on.ca

                             Table of Contents

Foreword

1. Introduction

2. The State of Software Quality

3. Can Faulty Software be “Fixed”?

4. A New Approach to Fault-Free Software

5. Security and Privacy by Design Based on Fault-Free Software

6. Conclusion
Foreword
Software, as we know it, was born in a Newtonian-like, mechanistic world some 50 years ago. For
the most part computers (and everything else in the universe) were considered distinct machines
and “data” was kept isolated in clearly labelled boxes. Data privacy was not an issue because no
connections were being made between the pieces of data.

We’re in a different age now and in many ways “We’re not in Silicon Valley anymore!” Deciding or
designing the level of privacy or engagement we want with our world requires us all to think on a
different level. The old rules and approaches are no longer relevant.

What does this mean for the conversation about security, privacy and technology? Marketing
guru Seth Godin once said, “Newtonian physics has taken us as far as it can.” The co-authors
agree – the Newtonian mindset of binary choices, linear expectations and a mechanistic world
view is wearing thin, yet we continue to see the world of technology cling tenaciously to it. When
it comes to technology, we now have no option but to accept the quantum, digital perspective
of “the world we cannot see.” If only it were that easy.

Here is the problem. Fifty years of mechanistic software development has run up against this higher
perspective, and that development approach is just not working out. The revelation in this new age is that
software is generally known as the poorest-quality man-made object since the beginning of time.
Because we have always seen it mechanistically, we have come to accept this substandard
quality as pre-ordained – the “original sin” of technology. Many think that is just the way it is and
nothing will change it.

In a recent study on the cost of data breaches, the Ponemon Institute found that almost one-third
(29%) of data breaches involved system glitches that include both IT and business process failures.1
Examples of system glitches include application failures, inadvertent data dumps, logical errors in
data transfers, identity or authentication failures (wrongful access), data recovery failures and more.

The truth is that faulty software increases the risks to privacy, security and good design. The real
tragedy is that most of the IT world is still calling for better Newtonian mechanics – the equivalent
of trying to put a fire out by adding more fuel.

The effectiveness of Privacy by Design is directly correlated with the quality of software; the 7 Principles
of Privacy by Design cannot stand firm on anything less than a foundation of fault-free software.
As remarkable or even improbable as it sounds, fault-free software is becoming a reality. But it
cannot be accessed from the perspective that created the problem in the first place.

This paper presents key elements for how we can tap into the value of software AND keep it doing
only what we want it to do in terms of ensuring our security and privacy. The way to accomplish
this is through creating the reality of fault-free software.

Ann Cavoukian, Ph.D.
Commissioner

Ian Percy and Brian Smith
Co-founders, Emendara LLC

1   Ponemon Institute, 2013 Cost of Data Breach Study: Global Analysis, p. 7. The other two main root causes of data breaches
are malicious or criminal attacks and human factors.

1. Introduction
It is no secret that software is, perhaps, the faultiest product of human civilization. As Weinberg’s
Second Law2 states, “If builders built buildings the way programmers wrote programs, then the first
woodpecker that came along would destroy civilization.” Yet it is hard to find an area of human
life these days where software would not be used. When software is an integral part of the end
product, such as a computer, Internet application or the entire operating system, a software fault3
means the failure of the product. We are all familiar with the never-ending “blue rotating circle” or user-unfriendly error messages. All of a sudden, a website that we often use, e.g. for Internet shopping
or TV listings, starts behaving erratically. The problem may remain unfixed for days or even weeks,
until the company notices an “unexplained” drop in sales or the number of customer complaints
exceeds a critical level. The cause of the problem could be that a programmer made a “minor”
change to a single line of code.

However, when faulty software is present in security products, this is much more troublesome than
a drop in sales or the mere nuisance of a blue rotating circle. We have to realize that no system
can really be declared secure if it is based on faulty software. The existing approaches to finding
and fixing software faults work, at best, for 75 per cent of fault occurrences.

This paper introduces a new approach to fault-free software based on semantic analysis – a
paradigm shift and a revolutionary alternative to current software testing. The paper posits semantic
analysis as an enabler of Privacy and Security by Design4 in terms of software quality.

2    Gerald Weinberg cited in: Murali Chemuturi “Mastering Software Quality Assurance: Best Practices, Tools and Technique
for Software Developers.” J. Ross Publishing, Fort Lauderdale, FL (2010).
3    For the sake of clarity, we consider a “fault” intrinsic to the code, such as a logical or an arithmetic mistake. On the other
hand, an “error” is primarily defined by a perception of a user, such as a wrong number, a program abort, etc. Errors can
occur with no mistakes in the code and can be caused, for example, by corrupt data, or can occur with a change in user
expectations.
4    Ann Cavoukian and Mark Dixon, “Privacy and Security by Design: An Enterprise Architecture Approach.” IPC, September
2013. http://www.ipc.on.ca/site_documents/pbd-privacy-and-security-by-design-oracle.pdf.

2. The State of Software Quality
The world runs on software and software runs on lines of code. Illogically, a Ford Taurus has 50
million lines of code, while it takes a mere 7 million lines of code to fly a Boeing 7775. A smartphone
requires 1.25 million lines, your computer has 50 million, and a digital watch operates with 75,000
lines. Software is being put into our bodies and even our brains. It is in our clothing and accessories.
Entrepreneur and investor Marc Andreessen put it most colorfully when he declared “Software is
eating our world!”

In their seminal book6, Jones and Bonsignour write: “While software is among the most widely used
products in human history, it also has one of the highest failure rates of any product in human
history due primarily to poor quality.” Too often software is at the root of major business problems
and, consequently, it is justifiably blamed for those problems7.

The reality is that even in these advanced times the quality of software is troubling. Some experts,
like Prof. Martyn Thomas in the U.K. and Steve McConnell in his book, Code Complete8 have
estimated as high as 15 to 20 errors per 1,000 lines of code. Jones and Bonsignour conclude that
in a typical system application of 100,000 source code statements there are 750 defects. That is
7.5 faults per 1,000 lines and that’s where it begins to hurt. On average, just to find the location
of a fault costs about $3.50 per line of code. If we’re dealing with a typical program built with
100,000 lines of code, the cost is $350,000. The National Institute of Standards and Technology has
identified that software defects cost nearly $60 billion annually and that 80 per cent of development
costs involve identifying and correcting defects9.

With respect to privacy and security, software faults may result in much more serious failures than
simple lapses in overall functionality. Consider a few recent examples:

    • in 2013, a technical glitch inadvertently exposed six million Facebook users’ personal
      information;10

    • in 2013, the U.S. Department of Homeland Security learned of an error present for four
      years in the software used to process background checks on its employees, making
      their names, social security numbers (SSNs) and date of birth potentially accessible to
      unauthorized users;11

    • in 2012, a software glitch in the Internet Corporation for Assigned Names and Numbers’
      (ICANN’s) processing system allowed some applicants’ names and file names to be viewed
      by unauthorized users;12
5    John Ellis, “2014 SEMA Product Development Expo Keynote” (Apr 17, 2014). http://www.slideshare.net/thecorconian/2014-
sema-product-development-expo-keynote.
6    Capers Jones and Olivier Bonsignour, “The Economics of Software Quality.” Addison-Wesley Professional (2011).
7    Dan Galorath, “Software Project Failure Costs Billions.. Better Estimation & Planning Can Help” (June 7, 2012). http://www.
galorath.com/wp/software-project-failure-costs-billions-better-estimation-planning-can-help.php.
8    Steve McConnell, “Code Complete: A Practical Handbook of Software Construction,” Second Edition. Microsoft
Press (2004).
9    “The Economic Impacts of Inadequate Infrastructure for Software Testing. Final Report.” RTI Project Number 7007.011,
Research Triangle Park, NC (May 2002). http://www.nist.gov/director/planning/upload/report02-3.pdf.
10 The Associate Press, “Facebook Privacy: 6 Million Users’ Contact Information Exposed,” June 21, 2013, http://www.
huffingtonpost.ca/2013/06/21/facebook-privacy-information-exposed_n_3480916.html.
11 “Privacy Response to Potential PII Incident,” Department of Homeland Security, http://www.dhs.gov/pii (accessed
June 25, 2014).
12 See Dan Goodin, “ICANN data breach exposes gTLD applicant data, leads to deadline extension,” April 13, 2012, http://
arstechnica.com/business/2012/04/icann-data-breach-exposes-gtld-applicant-data-leads-to-deadline-extension/.

• in 2014, a bug in the U.S. Department of Veterans Affairs’ eBenefits portal exposed thousands
     of veterans’ personally identifiable information online, including medical and financial
     information;13 and

   • in 2014, flawed code introduced in 2011 to the OpenSSL cryptography library was publicly
     disclosed, resulting in some 17 per cent of the Internet’s secure web servers being vulnerable
     to the “Heartbleed” attack.14

With the potential for data breaches increasing as more and more technologies collect,
process and retain information about us, these examples show the need for a new paradigm of
fault-free software.

13 See Ashley Gold, “House Veterans Committee demands answers on recent VA data breach,” January 27, 2014, http://www.
fiercehealthit.com/story/house-veterans-committee-demands-answers-recent-va-data-breach/2014-01-27#ixzz35W20Dvmc.
14 “Heartbleed,” Wikipedia, http://en.wikipedia.org/wiki/Heartbleed.

3. Can Faulty Software be “Fixed”?
Persistent attempts to “fix” faulty software aren’t a source of encouragement either. Software
company Pattern Insight claims that “...previously fixed defects account for up to 45% of all bugs in
production software.”15 Likewise, Yin et al.,16 found 14.8% – 24.4% of “fixes” cause further problems.
Jones and Bonsignour write:

    • The software engineering population of the USA numbers around 2.4 million and at any given
       moment one million of them are spending the day fixing bugs while unwittingly injecting new
       bugs into the system.

    • Better quality software would free up about 720,000 systems personnel for more productive
      and innovative work easily reducing development and maintenance costs by 50 per cent.

    • Typical testing protocols help considerably, but even those leave one out of four faults
      – 25 per cent – unidentified. When you think of the implications for the technology being
      applied to health care, personal data, the financial system and national security, to say
      nothing of our daily work, that failure level is rather frightening.

Roger Sessions, noted author and expert on complexity theory, developed a model for calculating
the total global cost of IT software failure. He concludes17 that if the United States could solve the
problem of IT failure, it could increase its GDP by over one trillion USD per year. At that level, the
United States is losing almost as much money per year to IT failure as it did to the financial meltdown.
What is alarmingly different is the cost of IT failure is paid year after year, with no end in sight.

Sessions also concludes that for every dollar of direct cost of software failure, $7.50 is lost to
indirect costs such as employee downtime, loss of business from disgruntled customers, legal
losses and so on.

Overall, 25 per cent of all software projects are cancelled before completion. Of the IT projects
that are initiated, from five to 15 per cent will be abandoned before or shortly after delivery18 as
hopelessly inadequate. Many others will arrive late and over budget or require massive reworking.
Gartner, a leading technology research company, found that the government sector spends about
73 per cent of its IT budget on keeping its technology systems working properly, higher than almost
any other segment19. According to the IT Dashboard website of the U.S. Federal Government20,
the 2014 IT budget is almost $59 billion, so do the math. Given the trauma of the healthcare.gov
launch, these tens of billions of dollars are not likely an overestimate.

HCL Technologies, a global technology powerhouse with 90,000 systems-related employees,
has found that at least 50 per cent of an application’s cost across its lifecycle goes to support

15 http://patterninsight.com/.
16 Zuoning Yin, Ding Yuan, Yuanyuan Zhou, Shankar Pasupathy and Lakshmi N. Bairavasundaram. “How Do Fixes Become
Bugs? — A Comprehensive Characteristic Study on Incorrect Fixes in Commercial and Open Source Operating Systems,”
Proceedings of the 19th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE’11), September 2011.
17 Roger Sessions, “The IT Complexity Crisis: Danger and Opportunity.” White Paper (November 8, 2009). http://sistemas.
uniandes.edu.co/~isis4617/dokuwiki/lib/exe/fetch.php?media=principal:itcomplexitywhitepaper.pdf.
18 Robert N. Charette, “Why Software Fails.” IEEE Spectrum (2 Sep 2005).
19 Kurt Potter, Michael Smith, Jamie K. Guevara, Linda Hall, and Eric Stegman, “IT Metrics: IT Spending and Staffing Report.”
Gartner, Inc. (25 January 2011). https://www.sgn.co.uk/uploadedFiles/Marketing/Content/SGN-Documents/Business-Plans/
Business-Plans-2011-Background/SGN-Business-Plan-Gartner-IT-Spend-2011.pdf.
20 https://www.itdashboard.gov/.

and maintenance.21 The operational cost of system support and maintenance is rising fast, they
add, “and not for the right reasons.” The fact that this is consuming such a large proportion of
the IT budget “makes it a cause for concern.” Add the discovery that fewer than 20 per cent of
organizations have a strategy to handle this financial waste and lack of system performance,
and an enormous and urgent opportunity presents itself.

Current testing processes leave about 25 per cent of faults and errors undetected (see, for
example, Panko22). Further, a study by the Research Triangle Institute estimated the annual cost
of an inadequate software testing infrastructure at $22.2 to $59.5 billion for the U.S. economy.
According to Gartner Research, when testing projects fail, the original technology projects
actually cost more than they return.23

Eugene Spafford, a noted computer security expert, said that “Instead of building secure systems,
we are getting further and further away from solid construction by putting layer upon layer on
top of these systems… The idea is for vendors to push things out rather than get things right
the first time.”24

The best current methods to bring software (and consequently privacy and security) under
our control involve a holistic development process that begins with strong requirements and
specifications and ends with testing the program to ensure that it delivers the correct results.
There are numerous commercial testing tools available, such as CodeSonar, Coverity, APTest,
Mago, and Fortify, to name a few. Most testing tools work best when focused on a single program,
usually in a special development environment.

In software testing, data is injected into the program and the results are examined. Valid inputs
are expected to produce valid outputs. To add surety, invalid inputs are also used, to verify that
the program identifies them as invalid. As the number of inputs increases linearly, the number of
output possibilities increases exponentially. Consequently, for any reasonably sized program, testing
every possible combination to determine whether the program works becomes impossible.
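The combinatorial explosion described above is easy to quantify. As a hypothetical illustration, assume each input to a program is a single 8-bit byte, so each input can take 256 values:

```python
# Exhaustive testing grows exponentially with the number of inputs.
def combinations(num_inputs: int, values_per_input: int = 256) -> int:
    """Number of distinct input combinations an exhaustive test must cover."""
    return values_per_input ** num_inputs

# One input is trivially testable; a handful are already beyond any budget.
print(combinations(1))   # 256
print(combinations(4))   # 4294967296 -- more than 4 billion cases
print(combinations(10))  # ~1.2e24 -- impossible to enumerate
```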

The mechanistic response by the software engineering community has been to gather fault statistics
and information under a program of continuous improvement. Within this framework, systems
engineers developed two very important tools: static analysis and dynamic analysis.

Static analysis checks a program against a list of the commonly discovered faults in software
programs. Programs are thus analyzed for the presence of each of these common potential faults.
It is the programmer’s responsibility to correct the faults, if found. The problem with this approach
is that the process will find only the commonly known faults … but what if the most damaging
faults aren’t so obvious? Fault discovery has to do more than find potential faults; it has to find
the actual faults.
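To make that limitation concrete, here is a toy sketch of pattern-based static analysis (purely illustrative; real tools such as CodeSonar or Coverity parse the code far more deeply). The checker can flag only the fault patterns it already knows about:

```python
import re

# Toy static analyzer: flag a few commonly known fault patterns in C-like
# source.  Anything not on the list of known patterns goes undetected.
KNOWN_FAULT_PATTERNS = {
    "assignment in condition": re.compile(r"if\s*\([^)=!<>]*[^=!<>]=[^=]"),
    "use of gets()": re.compile(r"\bgets\s*\("),
}

def scan(source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in KNOWN_FAULT_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {name}")
    return findings

c_code = 'if (x = 0) {\n    gets(buf);\n}\n'
print(scan(c_code))  # both known patterns are found; a novel fault would not be
```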

21 “5 Alternative Ideas for the Future of Application Management.” HCL Technologies (2014). www.hcltech.com/sites/default/
files/hcl_alt_asm_issue1.pdf.
22 Raymond R. Panko, “Thinking is Bad: Implications of Human Error Research for Spreadsheet Research and Practice,” (2008).
http://arxiv.org/ftp/arxiv/papers/0801/0801.3114.pdf.
23 Gartner Research and Diamond Cluster International, “Five Reasons Why Offshore Software Testing Deals Fail,” (May, 2005).
24 Eugene Spafford cited in: Robert Westervelt, “Security Expert: Industry Is Failing Miserably At Fixing Underlying Dangers,” (June
24, 2014). www.crn.com/news/security/300073238/security-expert-industry-is-failing-miserably-at-fixing-underlying-dangers.htm.

Then there’s dynamic analysis, which is more sophisticated. This is a process in which checkpoints
are inserted into the program under test. As the program is executed, data values are monitored at
those checkpoints to verify correctness. This provides a more realistic view of how the software
will transform the data. As with static analysis, the amount of data that can be tested is limited.
Moreover, the effective placement of the checkpoints is dependent on the skill and experience
of the tester.
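The checkpoint idea can be sketched with ordinary assertions (an illustrative toy, not a commercial dynamic-analysis tool): values are verified while the program runs, but only for the data the tester actually supplies:

```python
# Dynamic analysis sketch: checkpoints verify data values during execution.
def average(values: list[float]) -> float:
    # Checkpoint 1: precondition on the input data.
    assert len(values) > 0, "checkpoint: empty input would divide by zero"
    result = sum(values) / len(values)
    # Checkpoint 2: the mean must lie between the extremes.
    assert min(values) <= result <= max(values), "checkpoint: mean out of range"
    return result

print(average([2.0, 4.0, 9.0]))  # 5.0 -- both checkpoints pass for THIS input
# average([]) would be caught -- but only if the tester thinks to supply it.
```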

Despite these tools, software continues to be deployed to customers with embedded faults of
various levels of severity25.

25 Watts S. Humphrey, “Bugs or Defects?”. http://www.sei.cmu.edu/library/abstracts/news-at-sei/wattsmar99.cfm.
4. A New Approach to Fault-Free Software
An approach called “semantic analysis” takes a totally different view of how to ensure software
accuracy. This approach to fault-free software encompasses the following:

    • The code statements are analyzed with formal logic rules to ensure the software is semantically
      correct.

    • Arithmetic calculations are condensed and integrated into the logic analysis.

    • The output data set is tracked backwards through the logic to ensure that it is consistent with
      the input data set.

    • Computation threads are examined for end-to-end correctness.

Semantic analysis dates back to the 1993 work by Dr. Antonio Pizzarello26 and to his subsequent
patent27. Indeed, much of his work evolved while finding a way to solve the Y2K two-digit date
problem. Dr. Pizzarello noted that in order to properly understand this different methodology, it is
necessary to view computer programs from a point of view that may seem initially foreign to the
traditional methods of program analysis.

Semantic analysis of source code views programs as state machines that transition between states as
a function of predicates. A predicate is an action statement that must conform to the rules of logic.
As a program is executed, it moves from one state to another by means of its progress properties
and is constrained by its safety properties to stay within the bounds of its allowed transformations.
In this view, a program controls the flow of data from set to set using a set of predicates that must
be correct logically rather than a series of commands that operate on a datum.

Using this approach it becomes possible to analyze the results of the code without requiring its
execution. As shown in Figure 1, this is accomplished by using the notions of preconditions and
post-conditions – input and output states. After the code under investigation is translated into a
declarative, non-procedural execution model (using Dijkstra’s guarded-command format), a post-condition to precondition computation thread can be isolated and examined. If the predicates
in that thread are not logically correct or if the state transformations cannot be achieved, a fault
in the program exists and will be identifiable. More information about semantic analysis can be
found on the Emendara website28.
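The underlying idea of Dijkstra's weakest-precondition calculus can be illustrated with a toy model (a sketch of the general technique, not of Emendara's actual engine): for an assignment `x := e`, the weakest precondition of a post-condition Q is Q with `e` substituted for `x`:

```python
# Weakest-precondition sketch: predicates and expressions are modelled as
# plain Python callables over a state dictionary.  wp(x := e, Q) is computed
# by evaluating Q in the state the assignment would produce.
def wp_assign(var, expr, post):
    """Return the weakest precondition of `var := expr` w.r.t. `post`."""
    def pre(state):
        new_state = dict(state)
        new_state[var] = expr(state)   # effect of the assignment
        return post(new_state)         # Q must hold in the resulting state
    return pre

# Program fragment:  y := x + 1   with post-condition  y > 0.
# The weakest precondition is therefore  x > -1.
post = lambda s: s["y"] > 0
pre = wp_assign("y", lambda s: s["x"] + 1, post)

print(pre({"x": 0}))   # True:  x = 0 satisfies x > -1
print(pre({"x": -5}))  # False: from here the post-condition must fail
```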

26 A. Pizzarello, “New Method for Location of Software Defects.” AQuis ‘93. Venice, Italy. Oct. 1993. pp. 143-156.
27 Ashraf Afifi, Dominic Chan, Joseph J. Comuzzi, Johnson M. Hart, and Antonio Pizzarello. “Method and apparatus for analyzing
computer code using weakest precondition.” U.S. patent 6029002 (Feb 22, 2000; filed Nov. 13, 1995). http://www.google.com/
patents/US6029002.
28 www.emendara.com.

Figure 1. High-level process overview for semantic analysis.

Some of the technical benefits of semantic analysis are:

   • The computational correctness of the code statements is verified through formal logic rather
     than being inferred by a response to data.

   • Interfaces are verified by adding modules, programs and systems together as opposed to
     being verified in isolation.

   • Data is evaluated as a set rather than data points.

   • The scope of the data set determines the scope of correctness.

This last point is the key to addressing the system design and architecture. A program can be
completely fault-free but still produce errors. That is, while a program may be inherently correct, a
change in the data set or external environment can produce a result that the user does not want.
Or, put more bluntly, the program is correct but the data is bad – otherwise known as Garbage
In = Garbage Out!

This problem can be addressed by including different parameters in the definition of the data
set, so that the verification of the software can ensure different design standards. Therefore, this
new approach sets itself apart from current attempts at software quality such as design reviews,
verification and validation tasks, unit tests, integration tests, system acceptance tests, operational
(field) activation tests, regression tests and readiness confidence tests. In other words, it eclipses
current assessment programs with a more focused approach to software analysis. Regardless
of the intent expressed in specification or design documents, the performance of a program is
always measured in terms of how it actually performs, not by what is expected.

5. Security and Privacy by Design Based on
Fault-Free Software
Semantic analysis may be applied to the code base of software systems as an essential step
towards operationalizing Security and Privacy by Design in those systems. In order for systems to
achieve Security and Privacy by Design, their design and architecture must be verified and the
integrity of their functionality within networked environments must be assured.

Unlike in the early days of computing, most systems we use today, such as personal computers,
smartphones and the Internet, are continuously computing machines interacting with a non-predictable environment. Rather than terminate, they continue to accept inputs from an environment
and produce outputs for an unbounded duration of time. All such systems are characterized by
the presence of an environment that intervenes in a completely unpredictable manner at any
time. In many cases, there is an additional danger of malware, which is designed specifically to
cause trouble.

Security by Design in software engineering means that the software has been designed with
security as a system requirement. Malicious intrusions and attempts to steal data are expected,
and procedures are developed to detect and prevent them.

In terms of Security by Design, it is important to note that the “by Design” refers to the system
architecture. Whereas conventional architecture defines the rules and standards for physical
buildings, information system architecture addresses parallel issues for the design and construction
of computers, communication networks and the distributed business systems that are implemented
using these technologies.

Using the metaphor of physical buildings, data forms the “material” of the business system and
the computational logic makes the connections that make the data useful.

From an information security point of view, the dilemma is this: if something ‘breaks,’ was it a fault
in the material, or was the construction process flawed? This is exactly the question that needs to
be asked when an error occurs in software. There is also an additional question that can be asked
of a software system: “Does this system operate as intended in spite of an attempt to abuse or
subvert it?”

Years ago, most people would not consider this a fair question. After all, the software was built for
one purpose and now it is being asked to do additional things. Is it possible to create software
capable of such flexibility … and how can we know it will handle everything that comes along?
From a business or operational point of view, this question must be answered.

If an enterprise wants continued market success, the system(s) it relies on cannot ignore significant
changes in its environment. The use of semantic analysis in the development process addresses
this by providing a rapid and effective response. By updating the parameters that define the input
data set to include new threat profiles, the specific impacted computational threads can be
isolated and the location of the new “fault” identified. An additional complication is that some
faults may be due to missing code.

Bad actors have been active for many years now and their modi operandi have been well
characterized. Current defenses against malware are based on electronic “signatures” – specific
sequences of instructions that indicate their presence.

Semantic analysis can identify precisely the presence of malware. When a semantic analysis of
the code is performed, a functional signature can be generated. This signature characterizes
what the software actually does; any departure from the intended operation can be flagged
immediately. When this signature is used to monitor the real-time operation of the system, a very
strong level of security is established.
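As a conceptual sketch (an assumption about how such a scheme could work, not a description of an actual product), a functional signature can be modelled as a digest over the operations the verified code performs; any departure at run time changes the digest and can be flagged:

```python
import hashlib

# Conceptual model of a "functional signature": a digest of the operations a
# program has been verified to perform.  Observed run-time behaviour is
# digested the same way; any extra or altered operation changes the digest.
def functional_signature(operations: list[str]) -> str:
    return hashlib.sha256("\n".join(operations).encode()).hexdigest()

verified = functional_signature(["read config", "open db", "write report"])
observed_ok = functional_signature(["read config", "open db", "write report"])
observed_bad = functional_signature(["read config", "open db",
                                     "write report", "upload to remote host"])

print(observed_ok == verified)   # True -- behaviour matches the signature
print(observed_bad == verified)  # False -- the extra operation is flagged
```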

It is important to understand that the security of the system significantly diminishes if a program has
faults, errors and bugs. Simply put, faulty software just encourages and attracts hackers who use
“automation, global collaboration, seek greater efficiency, and are developing ever more powerful
threats every day”29. The first line of defense when it comes to cyber security is fault-free software.

As has been noted, whether we’re discussing security or privacy from a quality of software standpoint,
it’s really all one thing. That said, there are specific thoughts on privacy worthy of consideration.

In the same way that Security by Design has moved from being a purely mechanistic matter to
assuming a holistic view of the entire organization’s operations, Privacy by Design has emphasized
the need to adopt a proactive, rather than a solely reactive, regulatory compliance approach30.

There are three aspects to fully achieving the implementation of PbD to a verifiable level using
semantic analysis.

    1. The software implementation of a design feature must be verified to actually implement the
       intent of the design, no more and no less.

    2. The integrity of the system within the secured network environment must be assured.

    3. The parameters of personal privacy must be defined as part of the input data set used by
       semantic analysis.

In terms of Privacy by Design, the beauty of semantic analysis is that it can be fine-tuned to
privacy-specific issues. In other words, semantic analysis can identify the use of personal
information (PI) with considerable accuracy. PI is treated like any other variable of interest.
Once the semantic analysis of the code is performed, the output data set is tracked backwards
through the logic to ensure that it is consistent with the input data set. Very simply, if the
output data set is constrained to exclude PI, the program logic must correctly handle every
occurrence of PI in the input data set. As a result, any attempt, whether “authorized” or
unauthorized, to divulge PI will be identified and handled accordingly. Using semantic analysis
fine-tuned to privacy issues in this way, a third party could verify that software complies with
the PbD principles. Semantic analysis thus becomes the most powerful PbD enabler in terms of
software quality.
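
The tracking idea described above can be pictured as a dataflow check: any variable derived
from a PI input is flagged, and a flagged variable reaching an output constrained to exclude
PI is reported. The toy model below is only a sketch of that idea; real semantic analysis
operates on actual program logic, and the function and variable names here are invented.

```python
# Toy model of PI tracking: flag variables derived from PI inputs and
# report any that reach the (PI-excluded) output data set.
def find_pi_leaks(assignments, pi_inputs, outputs):
    """assignments: ordered (target, sources) pairs modelling program logic;
    pi_inputs: input variables known to carry PI;
    outputs: variables in the output data set, constrained to exclude PI.
    Returns the output variables traceable back to PI."""
    tainted = set(pi_inputs)
    for target, sources in assignments:
        if tainted & set(sources):
            tainted.add(target)       # PI flows into this variable
        else:
            tainted.discard(target)   # overwritten with non-PI data
    return sorted(tainted & set(outputs))


# Example: 'record' mixes in a PI field, then flows into 'report'.
program = [
    ("record", ["name", "visit_date"]),
    ("summary", ["visit_date"]),
    ("report", ["record", "summary"]),
]
leaks = find_pi_leaks(program, pi_inputs={"name"}, outputs={"report", "summary"})
# leaks → ['report']: 'summary' derives only from non-PI data.
```

Because the check is mechanical, it flags “authorized” and unauthorized PI disclosures alike,
which is what makes third-party verification against a stated policy conceivable.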

29 http://www.livesquare.com/.
30 A. Cavoukian, “Privacy by Design. The 7 Foundational Principles.” IPC (August 2009). http://www.ipc.on.ca/images/resources/7foundationalprinciples.pdf.

6. Conclusion
This paper has been built on several assumptions. First, that we need to move from a mechanistic,
Newtonian mindset to an energetic one of new possibilities; a quantum mindset, if you will.
Second, that the world does indeed run on software, and that it is possible both to take
advantage of this incredible power and to shape it to our specific needs. Third, that if we are
to achieve both control and freedom, as reflected in the Principles of Privacy by Design, we can
do so only on a foundation of fault-free software. Fourth, that semantic analysis is a
revolutionary technology capable of ensuring that foundation, regardless of the complexity and
dynamics of the environment.

The audacious goal of fault-free software is now within reach. The technology required to make
that a reality is available. What is most needed are open minds willing to explore new possibilities
of Security and Privacy by Design.

About the organizations

IPC
Information and Privacy Commissioner of Ontario, Dr. Ann Cavoukian, is recognized as one of the leading
privacy experts in the world. Noted for her seminal work on Privacy Enhancing Technologies (PETs) in 1995,
her concept of Privacy by Design seeks to proactively embed privacy into the design specifications of
information technology and accountable business practices, thereby achieving the strongest protection
possible. Since then, Privacy by Design has become a de facto international standard. Overseeing the
operations of the access and privacy laws in Canada’s most populous province, Commissioner Cavoukian
serves as an Officer of the Legislature, independent of the government of the day. Commissioner Cavoukian
was re-appointed for an unprecedented third term.

Emendara LLC
Emendara – Latin for “to remove or mend fault” – is an ambitious startup based in Scottsdale, Arizona. With
an expert team of engineers and computer scientists, Emendara is at the forefront of the field of semantic
analysis. The company is applying semantic analysis in a technology it calls Computational Logic
Verification©, which will be capable of rapidly identifying, locating and delivering an inventory of all faults,
errors and bugs in any software system or system of systems. Through such an approach, Emendara sets
itself apart from the usual testing tools. The team wants to change the paradigm for software quality. They
see fault-free software as the foundation for both Security and Privacy by Design and are determined to
rattle the software world for all the right reasons. At the time of writing, Emendara is creating an automated
device which, when fed source code, will deliver an inventory of faults and errors at remarkable speed. The
company is looking to ally with both developers and end-users in refining CLV technology.

Information and Privacy Commissioner of Ontario
2 Bloor Street East, Suite 1400
Toronto, Ontario
Canada M4W 1A8
Telephone: (416) 326-3333
Fax: (416) 325-9195
E-mail: info@ipc.on.ca
Website: www.ipc.on.ca

Emendara LLC
Telephone: (450) 502-3898
E-mail: info@Emendara.com
Website: www.emendara.com

The information contained herein is subject to change without
notice. Emendara and the IPC shall not be liable for technical or
editorial errors or omissions contained herein.
Information and Privacy Commissioner, Ontario, Canada
June 30, 2014