Assignment Instructions
For this Assignment, you will use several models and sets of questions defined in your textbook reading to analyze an ethical information technology scenario.
Choose from one of the following scenarios:
1. AI, Death, and Mourning Using a “Grief Bot”
2. Privacy, Technology, and School Shootings
3. The Vulnerability Disclosure Debate
4. The “Goodbye Fears Monster”
5. Students and Sensors: Data, Education, Privacy, and Research
Perform your analysis using each of the methods below:
Part 1: Answer the following questions about your chosen scenario:
1. Does it preserve human dignity? Does it enhance human dignity?
2. Does it preserve the autonomy of the human?
3. Is the data collection and processing necessary and proportional?
4. Does it uphold the common good?
Part 2: For your particular technology, complete the following from the perspective of the designer or developer. (There are examples in Ethical Data and Information Management: Concepts, Tools, and Methods, Chapter 10, Tables 10.4 and 10.5.)
We want to: ______________________
So that we can: ______________________
Which will deliver the following benefits: ______________________
To the following stakeholders: ______________________
Part 3: Consider the environment of your scenario from four perspectives:
1. Social (Consider the context of society and the organizational culture.)
2. Technical (Consider the technical architecture and design.)
3. Legal (Consider the legal issues that could affect the scenario.)
4. Moral (Consider the ethical and moral factors of the scenario.)
You can use a diagram similar to the one in Figure 10.6 of your textbook.
In completing the Moral factor, rely on the utility versus invasiveness matrix and discuss it in these terms:
High utility — low invasiveness (ethical risk far outweighed by the benefit to individual or society)
High utility — high invasiveness (requires additional controls to reduce invasiveness)
Low utility — high invasiveness (need invasiveness reduced and/or utility increased)
Low utility — low invasiveness (no value is being added to society or individuals)
After completing this systematic ethical assessment and risk impact analysis, provide a synthesis of what you have learned and write a recommendation as to whether use of this technology should continue. Provide justification for your recommendation through logical argument and supporting sources. You should also provide related examples that help support your recommendation.
Assignment Requirements
1. Select and read one case study scenario.
2. Answer the four ethical questions about the scenario in Part 1.
3. Complete the form in Part 2 from the perspective of the technology designer or developer.
4. Analyze the technology from the social, technical, legal, and moral perspectives in Part 3.
5. In completing the moral perspective in Part 3, use the utility versus invasiveness matrix.
6. Synthesize what was learned in Parts 1–3.
7. Recommend whether the technology should be used or continue to be used.
8. Provide supporting sources and examples to justify the recommendation.
The paper should be 3–4 pages and formatted in APA style.
IT590 Unit 6 Assignment Rubric
Course: IT590 Legal and Ethical Issues in IT
Criteria 1: Ethical Analysis of Technology (criterion score: / 55)
Level III (55 points): Meets all criteria:
• Answers the four ethical questions about the scenario in Part 1.
• Completes the form in Part 2 from the perspective of the technology designer or developer.
• Analyzes the technology from the social, technical, legal, and moral perspectives in Part 3.
• In completing the moral perspective in Part 3, uses the utility versus invasiveness matrix.
Level II (46.75 points): Meets three of the criteria above.
Level I (38.5 points): Meets two of the criteria above.
Not Present (0 points): Does not meet any criteria.

Criteria 2: Synthesis and Recommendation (criterion score: / 55)
Level III (55 points): Meets all criteria:
• Synthesizes what was learned in Parts 1–3.
• Recommends whether the technology should be used or continue to be used.
• Provides supporting sources and examples to justify the recommendation.
Level II (46.75 points): Meets two of the criteria above.
Level I (38.5 points): Meets one of the criteria above.
Not Present (0 points): Does not meet any criteria.

Criteria 3: APA Style and Writing Conventions (criterion score: / 10)
Level III (10 points): Meets all criteria:
• Applies current APA style to in-text citations and references, and document formatting if appropriate, with minor to no errors.
• Writing is focused, concise, and organized and articulates at a college level, with minor to no errors.
• Uses resources from reliable and/or scholarly sources.
Level II (8.5 points): Meets two of the criteria above.
Level I (7 points): Meets one of the criteria above.
Not Present (0 points): Does not meet any criteria.

Total: / 120

Overall score thresholds:
Level III: 90.01 points minimum
Level II: 84.01 points minimum
Level I: 1 point minimum
Not Present: 0 points minimum
Ethical Data and Information Management: Concepts, Tools and Methods
by Katherine O’Keefe and Daragh O Brien
Kogan Page, (c) 2018.
This chapter will discuss the concept of assessing ethical risks and impacts through the comparable and more familiar concept
of a privacy impact analysis. Examples of key questions you should be asking and answering in this planning process include:
What are the potential real-world impacts of your information process?
Does your new idea for a product, service or other process create a win-win situation or is it potentially predatory or
harmful?
Are you overlooking a potentially easily solvable issue that, if not addressed, could cost a great deal in terms of
reputational damage or reactive efforts to fix?
You will learn about the concepts of Ethical Impact Assessments in the context of a quality systems-based approach to ethical
information management. By the end of this chapter you will have a deeper understanding of the relationship between planning
for quality, risk management and ethical information management practices.
Effective risk management is a key component of any management system. In Chapter 8 we discussed the concept of ethical
information management as a quality system. This is an important conceptual connection in the design of the E2IM framework
for ethical information management. Your objective in the ethical management of information is to ensure that the information
and process outcomes that are delivered to your stakeholders in society meet the ethical expectations of your stakeholders,
such as supporting rights to privacy or enabling the support of or improvement of human dignity or freedom of expression.
Indeed, Tom Peters describes management as ‘the arrangement and animation of human affairs in pursuit of desired outcomes’
(Peters, 2015).
As we introduced in Chapter 8 and discussed further in Chapter 9, if you consider ethical information management as a quality
system, you can begin to seek out principles and practices from other domains of management and information management to
help you build ethics into information management. By adopting and adapting proven methods to support the arrangement and
animation of your information management affairs, you can consistently delight your stakeholders with outcomes that are
aligned with the ethic of society or exceed the positive expectations of society as to what good information management
practices and ethics can be.
You have also seen how information management is on the brink of a crisis of confidence as the ethical risks and pitfalls of new
technologies we are adopting are becoming more apparent in the mainstream. In many respects, we are facing a crisis, just as
manufacturing faced a quality crisis in the 1980s (see Table 10.1). One of the subtexts of the E2IM framework and the
approach we have taken to this book is that there are patterns, principles and practices in history that we can learn from. In
Table 10.1 we have taken some statements from a paper presented to the American Society for Quality (ASQ) in 1986 (Juran,
1986). You can see some interesting parallels with the challenges we face today.
Table 10.1: Mapping Juran’s quality crisis to information ethics challenges (Juran’s statements paraphrased for brevity)

Juran: There is a crisis in quality. The most obvious outward evidence is the loss of sales to foreign competition and the huge costs of poor quality.
Ethical information management equivalent: There is a crisis in information ethics. The most obvious evidence is the concerns about algorithmic bias and the potential for misuse of big-data technologies and the huge risks of abuse.

Juran: The crisis will not go away in the foreseeable future. Competition in quality will go on and on. So will the impact of poor quality on society.
Equivalent: The crisis will not go away in the foreseeable future. Technological evolution will go on and on. So will the impact of poor consideration of ethical issues on society.

Juran: Our traditional ways are not adequate to deal with the quality crisis. Our adherence to those traditional ways has helped to create the crisis.
Equivalent: Our traditional approaches to IT project management are not adequate to deal with the ethical crisis. In a sense our adherence to those traditional ways has helped create the crisis.

Juran: Charting a new course requires that we create a universal new way of thinking about quality – a way applicable to all functions and to all levels in the hierarchy.
Equivalent: Charting a new course requires that we create a universal way of thinking about ethics in information management – a way applicable to all functions and all levels in the hierarchy.

Juran: An essential element in meeting the quality crisis is to arm upper managers with experience and training in how to manage for quality, and to do so with a sense of urgency.
Equivalent: An essential element in meeting the information ethics crisis is to arm upper managers with experience and training in how to ethically manage information, and to do so with a sense of urgency.

Juran: Charting a new course also requires that we design a basis for management of quality that can readily be implanted into the company’s strategic business planning and has minimal risk of rejection by the company’s immune system.
Equivalent: Charting a new course also requires that we design a basis for management of information ethics that can readily be implanted into the company’s strategic business planning and has minimal risk of rejection by the company’s immune system.
In Juran’s 1986 paper he introduced a fundamental concept of quality management that has become known as the Juran
Quality Trilogy (Figure 10.1). The underlying concept of the Juran Quality Trilogy is that managing for quality consists of three
basic quality-oriented processes:
quality planning;
quality control;
quality improvement.
Figure 10.1: The Juran Quality Trilogy
Juran viewed these as being universal processes that exist across a range of activities, but he explicitly called out quality
planning as the starting point for all things quality. As he put it (Juran, 1986):
The starting point is quality planning – creating a process that will be able to meet established goals and do so under
operating conditions. The subject matter of the planning can be anything.
This is precisely what we are seeking to achieve from the perspective of ethical information management: the creation of a
process (or set of processes) that will be able to meet established ethical goals and do so under operating conditions. Our
subject matter is data and information, which might be directly about people or might indirectly relate to them, or which may
lead to outcomes that are positive or negative for individuals or society. What is required is a planning process where the
uncertainties around the alignment of business, information and technology domains are addressed to ensure the consistent
delivery of information and/or process outcomes that are ethically acceptable to society.
As with any other potential product or project, it is worthwhile for you to conduct an assessment to identify potential issues
affecting your proposed processing activities as early as possible in the life cycle of a process or initiative. This should then
inform your planning for how to address the ethical issues or considerations that may arise.
Juran drew parallels between his Quality Trilogy and the trilogy of processes that exist in the financial management function of
the organization (budgeting, cost control and cost reduction). When discussing his Quality Trilogy in 1986, he described how
he would ‘look sideways’ at how finance is managed, to call out the parallels. In the spirit of Juran, you need to look sideways
at other models of quality management for information to identify approaches and methodologies that might bring forth parallels
that your ethical information-planning process can be modelled on.
This is an important consideration given the sentiments expressed by Juran in Table 10.1 in relation to the need for speedy
implementation of skills and training in quality management, and the need to do it in a way that does not meet with excessive
resistance from the organization:
You need to be able to train management quickly in how to do these types of assessments and to understand how to
manage for ethical information and process outcomes.
You need to be able to introduce processes for ethical information planning into the organization in a manner that will have
‘minimal risk of rejection by the company’s immune system’ (Juran, 1986).
Thankfully, by taking a quality systems approach to ethical information management, you can readily identify parallels with the
planning principles of information quality management and data governance. However, a closer parallel can be found in the
disciplines of Privacy by Design and Privacy Engineering.
Privacy by Design
Privacy by Design is an approach to designing and developing information management and processing systems that requires
privacy and human values to be taken into account throughout the entire life cycle of the design, build and operation of the
system. The concept of Privacy by Design was popularized by Dr Ann Cavoukian while she was serving as the
Information and Privacy Commissioner for the Province of Ontario in Canada (Cavoukian, 2011).
Privacy by Design is characterized by seven foundational principles that map to fundamental principles of quality management,
as set out in Table 10.2.
Table 10.2: Privacy by Design principles
1 Proactive not reactive; preventative not remedial
2 Privacy as the default setting
3 Privacy embedded into design
4 Full functionality – positive-sum, not zero-sum
5 End-to-end security – full life-cycle protection
6 Visibility and transparency – keep it open
7 Respect for user privacy – keep it user-centric
Note that these are essentially statements of ethical principles relating to privacy and security applied to the design of
information processes. It should be no surprise therefore that Dr Cavoukian has recently published an updated set of principles
specifically for the development of AI Ethics by Design (Cavoukian, 2017).
Privacy Engineering
Where Privacy by Design, and its cousin AI Ethics by Design, are concerned with defining design principles for privacy and AI
ethics, Privacy Engineering is concerned with getting things built with privacy baked in and improving the function. Privacy
Engineering takes methodologies and practices from software engineering, information management and business process
engineering, amongst other disciplines, to enable you to implement the development of systems and technologies that support
Privacy by Design principles. The underlying concepts of Privacy Engineering are best explained in books by Michelle
Dennedy (Dennedy, Finneran and Fox, 2014) and Ian Oliver (Oliver, 2014). It is outside the scope of this chapter to dive deeply
into the detail on those topics.
What we will focus on, however, is the model that Dennedy describes for the Privacy Engineering process. This process
encompasses the three elements of the Juran Quality Trilogy and provides a model we can adapt to represent the process for
engineering ethics into information processes (Figure 10.2; Table 10.3).
Table 10.3: Mapping Privacy Engineering to Juran’s Quality Trilogy
Planning: understanding goals of organization and individuals; privacy policy; requirements.
Control: procedures and processes; privacy awareness training; quality mechanisms.
Improvement: quality assurance; quality assurance feedback.
SOURCE: adapted from Dennedy (2014)
Figure 10.2: The Privacy Engineering development process
One of the key tools used in the planning and quality assurance phases of Privacy Engineering is a Privacy Impact
Assessment (Dennedy, Finneran and Fox, 2014). Privacy Impact Assessments (PIAs) are a process that can help you identify,
prioritize and mitigate privacy-related risks during the design and development of systems and processes. They help you
implement Privacy by Design principles as an ethos in the development life cycle. As a governance tool, Privacy Impact
Assessments ensure regulatory compliance and adherence to standards by making sure the rules are defined and applied to
your proposed processing activities.
In some situations, and locations, Privacy Impact Assessments may be a statutory or contractual requirement. In many
jurisdictions, they are a requirement for public-sector bodies or bodies receiving public funds. Under the EU General Data
Protection Regulation, ‘Data Protection Impact Assessments’ are required in many cases. Impact assessments may also be
required as part of contractual terms of a project. The EU’s Article 29 Working Party (or the European Data Protection Board
as they will be known after 25 May 2018) also explicitly references the need to conduct PIAs as an iterative process where
individual steps may need to be repeated as ‘the development process progresses because the selection of certain technical
or organizational measures may affect the severity or likelihood of the risks posed by the processing’ (Article 29 Working Party,
2017).
Other reasons why organizations would consider carrying out a Privacy Impact Assessment include:
Risk management – in addition to data privacy risks other risks such as ethical risks can be identified. The organization
can also identify risks associated with the internal culture and ways of thinking about data. A PIA requires you to make
formal decisions about what you will do about those risks.
Organizational learning – this goes to Juran’s point about needing to develop management competence in these areas.
PIAs can help the organization learn about and better understand data privacy risks, the nature of their data flows, and the
perspectives of their stakeholders and customers on data issues.
It is good practice to conduct your PIAs iteratively and review at different stages in the development and implementation of a
project. This allows for iterative elaboration of detail and refinement of your plan. It also allows the PIA process to be used as a
quality control and validation process to make sure that the things you had identified as needing to be done are actually done.
Reflecting the Concerns of Individuals and Society
One key element of the regulatory guidance on Privacy Impact Assessments globally, in particular in the European Union, is the
clear focus on the need for the assessments to reflect privacy concerns of individuals and society. In this context, the PIA
requires the organization to assess the ‘privacy risk appetite’ of society to ensure that the developed solutions and processes
meet the expectations and needs of society. This is explicitly referenced in Dennedy’s recognition of the need to address both
the goals of the organization as well as the goals and requirements of the individuals affected by your proposed use of data.
Towards Ethical Information Engineering?
This requirement in Privacy Impact Assessments to consider the external stakeholder’s concerns and expectations is entirely
consistent with the stakeholder expectation component of the E2IM framework. After all, the objective of ethical enterprise
information management is to ensure that the right outcomes are being delivered to the stakeholders in society. In that context,
we can reimagine the E2IM framework as a variation on Dennedy’s Privacy Engineering process. Just like Dennedy’s model,
the various stages in this process map to Juran’s Quality Trilogy (Figure 10.3).
Figure 10.3: The ethical information engineering process
Just like the discipline of Privacy Engineering, the planning process requires you to conduct some form of assessment to
enable you to determine questions of policy and requirements for implementation of your processes, training and controls. That
assessment will also provide a quality assurance function by enabling you to check if the things you determined needed to be
done actually were done. In short: effective ethical information engineering requires an Ethical Impact Assessment.
Because you are not reinventing the wheel when it comes to conducting your Ethical Impact Assessment (EIA), at this point you
should be able to identify methods and processes from Privacy Impact Assessments or other risk-assessment processes in
your organization that you can adopt and adapt. If not, the rest of this chapter provides an overview of a model approach you
can use.
Principles
Privacy by Design provides a strong core set of design principles that can guide a Privacy Impact Assessment. But where can
you look to find equivalent principles for ethics? By drawing on and distilling the ethical principles and models we discussed in
the first half of this book, we have codified five basic interrogative rules to help formulate your starting position for analysis.
These interrogatives seek a positive outcome as a determiner of ethical action. Where the positive contribution to the social
good is not the priority, it balances the priorities against the social ethic of the necessity of preserving human rights. An action
with an outcome that violates these rights may be expected to come into conflict with the societal ethic that regards human
rights as a fundamental priority. As modern information management capabilities may process, combine or link, and make
available vast amounts of information, it is important to consider the outcomes resulting from data processing that are not the
focus or intended outcome. This test will need to consider not just the intended outcome but other anticipated possible
outcomes.
We explored these questions in Chapter 7 with some worked example scenarios. You will recall that these questions can often
be straightforward to answer, but can, and should, provoke debate, particularly where the data that is proposed to be
processed is particularly sensitive or the potential impacts on individuals are significantly far-reaching. For example, in the
context of individuals with diminished or diminishing capacity to make informed choices about how their information is
processed, what ethical issues might arise? We will use this scenario as a reference in the rest of this chapter.
Scenario: applications of life-logging technology for Alzheimer’s patients
An organization is developing advanced life-logging capabilities to aid people suffering from conditions affecting their memory
and cognitive processes. Day-to-day actions and events are recorded to serve as a reviewable record of events, acting, in
effect, as a prosthetic memory.
Question 1: Does it preserve human dignity? Does it enhance human dignity?
As this application of technological advancements might possibly do a great deal to ease the distress of a person
suffering from conditions such as Alzheimer’s disease, it could very much enhance the dignity of the person.
Question 2: Does it preserve the autonomy of the human?
The planned capabilities of the technology would help to preserve the autonomy of the device-wearer. However, the life-
logging technology would by its nature record the interactions of the device-wearer with other people, capturing their
personal data as well. Controls would need to be implemented to take their autonomy into account, including the
possibility of choosing not to have their data processed.
Question 3: Is it necessary and proportionate?
In the context of the device-wearer, the processing would likely be necessary and proportionate. However, the question of
necessary and proportional processing also arises in the context of the other people the device-wearer comes into contact
with. Measures should be taken to ensure that processing of the personal information of these people is minimized,
particularly if there are no measures in place to ensure free and informed consent.
Question 4: Does it uphold the common good?
This application of technology is primarily focused on the enhancement of individuals’ dignity, but it could also be argued
that its availability would also be of more general benefit to communities as a whole. Family and friends of a person
affected by Alzheimer’s disease might also benefit from its use. Developments in care to aid members of a community are
likely to improve the community as a whole.
Good governance requires decision-making processes to be recorded. If you are to ensure the alignment of the ethic of the
organization with the ethic of society, and if you are to properly recognize controls and other situational modifiers for the ethic of
the individual (Trevino, 1986), a more formal analysis of the ethical issues and risks in the proposed processing activity is
required. This is especially the case if you want to be able to audit your processing later, or if you want to support the
development of effective organizational learning about ethics and their application in information management.
To that end, you need a process!
Process
In our consulting work, we like to find models and methods that we can use over and over again to simplify the execution of
processes for clients. In our experience, if you are trying to get management in an organization to adopt a new way of doing
something, it should be as simple as possible. Ideally, it should also be a process that can be applied to different aspects of the
organization.
Our impact assessment framework is an adaptation of Danette McGilvray’s ‘10 Steps to Trusted Information process’
(McGilvray, 2008). This process works well as it is a simple, structured method that follows a clear and logical flow. In our
consulting work, we use this methodology for Privacy Impact Assessments (PIAs) and Ethical Impact Assessments (EIAs). As it
is grounded in quality management principles and methods, it is a perfect fit for the quality systems-based approach to ethical
information management that you will be applying through the E2IM framework. In addition, it provides a relatively standardized
way of working for management and staff who may already be looking at information quality problems and opportunities in your
organization. Finally, this framework allows for iterative loops and refinement of the proposed processing activities, depending
on the ethical or privacy issues that are identified (Figure 10.4).
Figure 10.4: Castlebridge Ethical Impact Assessment methodology
This methodology also supports a clear separation of duties between the assessment phase and the
remediation/implementation phase of the process. This is in line with good practice in data governance (Figure 10.5).
Figure 10.5: Castlebridge Ethical Impact Assessment method – phases highlighted
This approach to structuring an EIA allows for a common set of process steps to be conducted regardless of the scale of the
PIA or the range of jurisdictional variants on recommended PIA process steps that a project may require. It also allows for
iterative review through the assessment phase if additional detail is required to identify root causes, inform improvement plans,
or understand the impacts of proposed processing on individuals.
The Inputs and Outputs of the Process
In this section, we outline the key process steps for the EIA. Note that the Impact Assessment phase of this method, set out
below, extends to Step 6: ‘develop improvement plans’. The actual implementation of recommendations and requirements from
an Impact Assessment is the responsibility of the teams in your organization who are developing and implementing the
proposed processing activities or information management systems. The objective of the assessment is to identify
requirements for processes and procedures, training or other control mechanisms for ethical outcomes that need to be
designed in to avoid or mitigate ethical risks.
Step 1: Define Business Need and Approach
Clarity on the goal is an essential part of quality management. You need to think about what the desired information and
process outcomes are that you are aiming to deliver. Without that clarity, there is a risk of misunderstanding, miscommunication
or failure to identify critical risks.
This process begins with a requirement for a clear statement of the business need and approach for the proposed processing.
This is an important first step in the methodology as it supports the following key functions:
✔ Determination of what kind of assessment is required. Are you going to constrain your analysis to just the privacy and
privacy-derived outcomes in a Privacy Impact Assessment, or are you conducting a broader Ethical Impact Assessment?
✔ Defining the purpose of the proposed processing. What is the objective? What are the information and process
outcomes you are trying to achieve?
✔ Identifying if there are multiple purposes and outcomes potentially to be achieved within the proposed processing, and
identifying if there might be conflicts between those goals.
✔ Identifying the proposed benefits from the proposed processing of information.
✔ Identifying the relevant stakeholders and beneficiaries of the proposed processing.
✔ Defining processing activities that are not in scope for the impact assessment.
This definition of your need and approach will be an important reference throughout the rest of the assessment, and indeed will
be an important reference point for data governance and other control activities after the proposed information processing and
management capability is deployed.
You can derive your business need and approach from the project charter or scope documents for a given project. For an
Ethical Impact Assessment, the focus in this instance is on the proposed processing of information, and information and
process outcomes that your organization is looking to deliver as a result. It is important in the definition of the business need
and approach that attention be paid to the needs of and benefits to individuals as stakeholders.
This is an essential requirement to ensure that you are considering what the expectation of the ethic of society would be in the
context of your processing activities, and who that society is made up of. It is critical to do this at this point so that an
appropriate stakeholder-theory normative approach to ethics can be applied. As a principle, it is one we find
articulated in legal concepts such as the ‘Neighbour Principle’ in Tort law.1 This principle holds that ‘one must take reasonable
care to avoid acts or omissions that could reasonably be foreseen as likely to injure one’s neighbour’. Your ‘neighbour’ in turn
is defined as ‘someone who was so closely and directly affected by the act that one ought to have them in contemplation as
being so affected when directing one’s mind to the acts or omissions in question’ (Oxford Reference, 2017).
In the context of Data Privacy Impact Assessments, the Article 29 Working Party also explicitly references the need to conduct
assessments as an iterative process where individual steps may need to be repeated as ‘the development process progresses
because the selection of certain technical or organizational measures may affect the severity or likelihood of the risks posed by
the processing’ (Article 29 Working Party, 2017). The same is true of Ethical Impact Assessments. As more information is learnt
about the nature and purpose of processing and the potential complexities or social issues that might arise, it is likely that the
process will need to be revisited to reassess decisions taken.
In such a context, it is important to refer back to the original statement of business need and approach, determine if the ethical
risks identified in relation to the proposed processing are appropriate in the context of the business need, and update the
statement of business need and approach or your assessment of risk, as appropriate. As such, it is very important to define
and capture the initial statement of business need and approach in a structured format (Table 10.4).
Table 10.4: Business need and approach template
We want to: Describe the information-processing activity that is the subject of the Ethical Impact Assessment. This should be sufficiently detailed to help you and your colleagues assess and identify potential ethical challenges.
So that we can: Describe the organization capability that the processing is intended to provide or the social problem that the information processing is intended to address.
Which will deliver the following benefits: Describe the intended benefits to the organization and to stakeholders. Focus should be on the outcomes in terms of information and process outcomes and the impact of those outcomes.
To the following stakeholders: Describe the stakeholders, both internal and external, who it is intended will benefit from the proposed processing activity or whose data will be processed as part of this activity.
In our example of the life-logging application for people suffering from memory loss or other cognitive impairment, the statement
of business need and approach would look something like the example in Table 10.5.
Table 10.5: Example statement of business need and approach
We want to: Provide a 24/7 recording capability through audio and video recording using wearable and smartphone applications. We will use machine-learning processes to categorize and tag ‘memories’ with relevant metadata and provide a web-based or app-based search portal for users.
So that we can: Provide a prosthetic memory by delivering a searchable repository of interactions and events that the user will have been party to.
Which will deliver the following benefits: This will allow people with a cognitive or memory impairment to operate and live more independently through the provision of memory prompts or confirmation evidence for events they may misremember or forget entirely.
To the following stakeholders: The persons with cognitive impairment, their families and friends, third parties who may have their image or other data recorded or stored, medical practitioners, carers.
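Although the template is presented as a table for use in workshops, it can help to capture the statement in a structured, machine-readable form so that it can be versioned and referred back to at later steps of the assessment. The short Python sketch below is purely illustrative: the BusinessNeedStatement class and its field names are our own, not part of the published methodology, and the example values simply restate Table 10.5.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BusinessNeedStatement:
    """Structured capture of the Table 10.4 template (illustrative only)."""
    we_want_to: str       # the information-processing activity being assessed
    so_that_we_can: str   # the capability or social problem it addresses
    benefits: List[str] = field(default_factory=list)      # intended information/process outcomes
    stakeholders: List[str] = field(default_factory=list)  # internal and external stakeholders

# Hypothetical encoding of the life-logging example from Table 10.5
life_logging = BusinessNeedStatement(
    we_want_to=("Provide a 24/7 audio and video recording capability via wearable and "
                "smartphone apps, with machine-learning tagging and a searchable portal"),
    so_that_we_can="Provide a prosthetic memory as a searchable repository of interactions and events",
    benefits=["People with cognitive or memory impairment can live more independently",
              "Memory prompts and confirmation evidence for misremembered or forgotten events"],
    stakeholders=["Persons with cognitive impairment", "Families and friends",
                  "Third parties whose image or data may be recorded",
                  "Medical practitioners", "Carers"],
)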
Step 2: Analyse Information Environment
In this phase of the framework we gather, compile and analyse information about the current situation and information
environment, as well as the proposed processing. The goal of this phase is to develop an understanding of the landscape the
proposed processing will take place in. The objective is to identify the components of the business, information and technology
environment that will need to be aligned, and also to identify the driving ethic of the organization and relevant ethic of society
that will need to be matched to ensure the information and process outcomes meet expectations.
In conducting this analysis, it is useful to consider the environment from four distinct perspectives or ‘compass points’ (see
Figure 10.6):
Social (putting the processing in the context of society and the organization culture).
Technical (considering the technical architecture and design).
Legal (considering the legal issues that might affect the processing).
Moral (considering the ethical and moral dimensions of the proposed processing).
Figure 10.6: The four compass points for Ethical Impact Assessment
It is important to be clear about how you are engaging the ethic of society and seeking to understand their expectations. Key
questions you need to answer at the ‘Social’ compass point include:
What are the attitudes of people in society to the type of processing proposed?
What are the attitudes of people in society to the proposed benefits?
Are the proposed benefits credible to society?
Have you engaged with people to find out this information? Have you used surveys, commissioned research, sought out
existing research, etc.?
What experiences exist in other jurisdictions for similar things?
In the context of the ‘Moral’ compass point, you need to have a structured method to tap into the views of the people in your
organization, and potentially representatives of your external stakeholders. Surveys and facilitated brainstorming can be very
effective techniques to elicit information. One method that we use with clients is a form of silent brainstorming that poses the
ethical question in a structured way.
The utility/invasiveness matrix

The method for this is actually quite simple. In a facilitated workshop, you present the
statement of business need and approach to the group. On a post-it note, or using an electronic voting process, each
participant ranks the proposed processing on a scale of 1 to 10 (low to high) along two axes:
Utility ranks the degree to which the proposed processing and its associated information and/or process outcomes will do
good in society or will promote happiness.
Invasiveness is the measure of the level of intrusion into the personal life, relationships, correspondence or
communications of the individual or a group of individuals as a result of the processing activity or the information outcome
or process outcome that is delivered.
Participants are then asked to record their ‘margin for error’ on that ranking scale. This is the level of ‘wriggle room’ that they
think might exist in the application of trade-offs and balancing rights and obligations. It is essential, however, that this part of
the process is done SILENTLY after a discussion of the proposed business need and approach. This is to help avoid group-
think and to allow for individuals to have an opportunity for role taking and to avoid the ranking being dominated by the views
of a single dominant or persuasive voice.
The facilitator should collect the scores and the margins for error and plot these on the four-box matrix in Figure 10.7. Each
respondent’s co-ordinates map out a box indicating their personal ‘moral space’ for the proposed processing. Overlaying each
respondent’s ‘moral space’ on top of each other, the facilitator can quickly identify the zone of consensus. This zone is what the
group who have been taking part indicate is their ‘ethical risk appetite’ for the proposed processing.
Figure 10.7: The utility versus invasiveness matrix
This area of ‘ethical risk appetite’ will sit somewhere in the four quadrants of the utility/invasiveness grid:
High utility–low invasive initiatives are relative no-brainers. The ethical risks are potentially far outweighed by the benefits
to individuals or society.
High utility–highly invasive initiatives need to have additional controls, checks and balances, or other factors considered
to reduce the level of invasiveness or at least provide some level of redress and balance.
Low utility–highly invasive initiatives need to be reconsidered to see how the invasiveness can be reduced or the utility
increased.
Low utility–low invasiveness initiatives are not adding any value to society but are unlikely to be causing any great harm.
In the context of our life-logging example, the processing would likely be rated quite invasive but also of high utility and
supportive of human dignity. Therefore, it is important to understand how critical the impact might be on the ability to implement.
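The scoring and plotting step can also be done mechanically once the silent ranking is complete. The sketch below is our own illustration, not part of the Castlebridge method: it assumes each participant’s utility, invasiveness and margin for error are recorded as numbers on the 1 to 10 scale, intersects the resulting ‘moral space’ boxes to find the zone of consensus, and reports which quadrant of the matrix that zone falls into.

# Illustrative sketch: locating the zone of consensus on the utility/invasiveness matrix.
# Each participant records (utility, invasiveness, margin_for_error) on a 1-10 scale.

def moral_space(utility, invasiveness, margin):
    """Return the participant's box as (u_min, u_max, i_min, i_max), clipped to the 1-10 scale."""
    clip = lambda x: max(1, min(10, x))
    return (clip(utility - margin), clip(utility + margin),
            clip(invasiveness - margin), clip(invasiveness + margin))

def zone_of_consensus(scores):
    """Intersect every participant's moral space; returns None if there is no overlap."""
    boxes = [moral_space(*s) for s in scores]
    u_min = max(b[0] for b in boxes); u_max = min(b[1] for b in boxes)
    i_min = max(b[2] for b in boxes); i_max = min(b[3] for b in boxes)
    if u_min > u_max or i_min > i_max:
        return None  # no shared moral space: more discussion or redesign is needed
    return (u_min, u_max, i_min, i_max)

def quadrant(u, i, midpoint=5.5):
    utility = "high utility" if u > midpoint else "low utility"
    invasive = "high invasiveness" if i > midpoint else "low invasiveness"
    return f"{utility} / {invasive}"

# Hypothetical workshop scores for the life-logging scenario
scores = [(8, 7, 1), (9, 8, 2), (7, 7, 1)]
zone = zone_of_consensus(scores)
if zone:
    centre_u = (zone[0] + zone[1]) / 2
    centre_i = (zone[2] + zone[3]) / 2
    print("Zone of consensus:", zone, "->", quadrant(centre_u, centre_i))

For the hypothetical scores shown, the consensus lands in the high utility/high invasiveness quadrant, which matches the qualitative reading of the life-logging scenario above.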
Steps 3 and 4: Assess Information Privacy Quality and Business Impact
The next steps in the process occur in parallel. These relate to the assessment of the ‘quality of information privacy’ in the
proposed processing activity. The objective here is to begin the process of risk assessment and evaluation by identifying
critical issues that would prevent the proposed initiative from proceeding as initially scoped. Examples of these showstoppers could
include:
lack of a legal basis for conducting the processing;
the proposed scope and scale of processing not meeting the necessity or proportionality requirements under GDPR;
the proposed processing being highly invasive and of limited utility.
In the first instance, our methodology proposes a review of the defined business need and approach to determine if the
identified issues can be remedied through a refinement of or clarification of either the need or the approach to be taken. The
strategy here is to seek to increase utility or reduce invasiveness or restructure the proposed processing to address any
blatant illegality (Figure 10.8).
Figure 10.8: Reviewing business need and approach – the utility/invasiveness goals
Once the business need and approach has been reviewed to determine if it can be amended, the organization should conduct
a second review of the information environment and assess quality of information privacy again, particularly if the
remediation/mitigation resulted in a change to their proposed information architecture or environment, to determine if findings of
that review still hold or if new issues or risks are identified.
For our life-logging scenario, we will assume that there is no critical showstopper. There is no legal issue that is terminal to the
execution of the processing, but there are a range of root causes that need to be identified and mitigated to reduce the
invasiveness of the proposed processing and maximize its utility. For example, is it possible to reduce the level of recording or
increase the awareness of third parties that there is recording taking place?
Step 5: Identify Root Causes
For ethical issues that have been identified that are not terminal to the proposed project, it is necessary to identify the root
causes of the issues and gaps identified. This is important as it ensures that the correct remediation is applied to address the
correct problem. It also allows for identification and determination of interim measures that might be applied.
The root-cause identification should be conducted through a facilitated workshop. This may be conducted as an onsite
workshop or as an offsite review and voting cycle.
This phase is grounded on several key assumptions derived from quality management principles:
Any issue identified may have multiple potential root causes.
Addressing one or more issues will reduce the inherent risk, but will leave residual risks to be considered, particularly
where lower-priority root causes are not addressed.
The focus should be on identifying what the root causes are for any individual failure mode (in this case, a privacy-
impacting issue).
To determine the appropriate solution, we need to identify the relevant root cause.
In the context of conducting the assessment, it is important to consider the probability and impact of an issue or risk from the
perspective of the data subject as well. Likewise, the probability of detection should be addressed from both the internal
(existence of an internal detective control) perspective and from the customer/data subject perspective (how easy would it be
for them to demonstrate that the failure mode and root cause existed and impacted on their fundamental rights?).
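One way to make this dual-perspective scoring concrete is to borrow the risk priority number idea from failure mode and effects analysis (FMEA). The sketch below is our own adaptation rather than anything prescribed by the methodology; the failure modes, root causes and 1-to-5 ratings for the life-logging scenario are hypothetical.

# Illustrative only: an FMEA-style scoring of ethical failure modes from two perspectives.
# Probability, impact and (non-)detectability are each rated 1 (low) to 5 (high);
# a higher product flags a root cause that should be prioritized for remediation.

def risk_score(probability, impact, non_detection):
    return probability * impact * non_detection

# Hypothetical entries for the life-logging scenario
failure_modes = [
    {"failure": "Third-party faces retained in searchable video",
     "root_cause": "No redaction step before storage",
     "internal": (4, 4, 3),       # organization's view: probability, impact, failure to detect
     "data_subject": (4, 5, 5)},  # bystander's view: they are unlikely to ever detect it
    {"failure": "Metadata mis-tags a sensitive interaction",
     "root_cause": "Unreviewed machine-learning classification",
     "internal": (3, 3, 2),
     "data_subject": (3, 4, 4)},
]

for fm in failure_modes:
    internal = risk_score(*fm["internal"])
    subject = risk_score(*fm["data_subject"])
    # Take the worse of the two perspectives so the data subject's exposure is never ignored
    print(f"{fm['failure']}: internal={internal}, data subject={subject}, "
          f"priority={max(internal, subject)}")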
Quality management techniques for root-cause analysis should be used here, such as ‘five whys’ analysis and fishbone
diagrams. ‘Five whys’ analysis is as uncomplicated as it sounds. It requires you to ask ‘why’ five times about a particular
problem or issue until you have identified what the precipitating root cause is. A fishbone diagram is a tool for clustering those
root causes based on common factors such as people, process, management and technology factors.
Five whys analysis
The five whys analysis method is relatively straightforward. You define your problem statement (in our case, the ethical
dilemma we are faced with) and then you ask ‘Why?’ a number of times to get to the real root cause and the solution that
addresses that most appropriately in light of any constraints that may exist (eg budget). For example, if the problem you are
facing is birds leaving droppings on your car if it is parked outside your house for a while, you might proceed as follows:
Q: Why are birds leaving droppings on my car? A: Because it is parked outside and because birds are well fed.
Q: Why are birds well fed? A: We have fruit trees in the back garden, as do our neighbours.
Q: Why can’t we get rid of the fruit trees? A: Our neighbours like them, as do we.
Q: Why are bird droppings landing on my car? A: It is parked outside with no cover.
Q: Why don’t I buy a car cover or build a garage? A: No space or money to build a garage.
Conclusion: buy a car cover.
Fishbone diagrams
A fishbone diagram (Figure 10.9) is a quality management tool used to cluster common root causes together to help identify the
critical areas of a problem leading to an issue. You write your problem statement at the ‘head’ of the fish and you identify the
contributing areas as the ‘ribs’, with each root cause you identify being an offshoot of one of the ‘ribs’ of the fish.
Figure 10.9: An example of a fishbone diagram
Fishbone diagrams are often called ‘Ishikawa diagrams’ because their first use is credited to Kaoru Ishikawa.
For our life-logging example, one potential root cause is the set of issues arising from the automated processing of people’s data for
the generation of the metadata to support search. It might also be the case that there are concerns about the retention of
identifiable data in the form of video. Therefore, the improvement plans might need to focus on security, disclosure of
processing purposes, and potentially the provision of technology to support redaction of faces in videos.
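For later steps it helps to capture the fishbone output in a form that can be carried into improvement planning. The mapping below is a hypothetical sketch for the life-logging example, clustering candidate root causes under the ‘ribs’ mentioned earlier (people, process, management and technology); it is not the book’s worked output.

# Illustrative sketch: a fishbone (Ishikawa) capture for the life-logging scenario.
# Keys are the 'ribs' (contributing areas); values are candidate root causes.
problem_statement = "Processing of third parties' data is more invasive than the stated need requires"

fishbone = {
    "People": ["Bystanders are unaware that recording is taking place"],
    "Process": ["No procedure for obtaining or recording third-party consent",
                "Processing purposes are not disclosed to those recorded"],
    "Technology": ["No face-redaction capability in stored video",
                   "Automated metadata generation runs on all captured data"],
    "Management": ["Retention periods for identifiable video are undefined"],
}

print("Problem:", problem_statement)
for rib, causes in fishbone.items():
    print(rib)
    for cause in causes:
        print("  -", cause)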
Step 6: Develop Improvement Plans
Once you have identified the root causes for the ethical issues in your information management processes, you need to make
your plans to do something about them. At this point in the process you are aiming to define requirements for:
processes, procedures and controls;
training and awareness;
ethics mechanisms.
A key requirement of this phase in the process is that the remedial action proposed should be mapped directly to one or more
identified root causes. You should then conduct a business-case analysis on the proposed remedial actions and may choose
not to implement one or more of the proposed actions. Likewise, if the cost of implementing remedial actions of any kind is
prohibitive, it should trigger an immediate review of the originally proposed business need and approach, or your senior
management team need to sign off on the fact that they are choosing to engage in a form of processing that has been
assessed to be unethical, and potentially unlawful.
Even where you fund all possible remediation actions, there will always be a level of risk that the ethic of society and the
expectations of individuals in society will not be aligned with the ethic of the organization and that your information and process
outcomes will not deliver the desired results. This can happen where the public perception and awareness of the impacts on
utility, invasiveness, beneficence and the other ethical characteristics we identified in Chapter 8 differ from the perception of
those values held by the organization and the individuals in the organization. This is similar to the perception of quality in
manufactured goods or information when the customer expects something different to what the manufacturer has produced. To
put it another way, when people become aware of the impacts that can arise due to misuse or abuse of a technology, it may
change the risk calculation for your ethical balancing act.
You should also be clear about who is responsible for delivering the mitigating actions and by when. This is an important audit
and verification control for post-implementation review of the PIA process to ensure that all things that were to be done have
been done. Controls should be designed and defined at this point as they are part of the improvement process. You might not
execute or implement these controls until the end of the remediation phase, but the earlier you consciously begin designing,
the earlier you can pilot and test these controls for effectiveness.
In this context, controls can include (non-exhaustive list):
Organizational:
– training;
– implementing governance controls;
– revised policies and procedures.
Technological:
– implement user access controls;
– detection and logging of access to data;
– data masking or anonymization/pseudonymization technologies.
Customer/user facing:
– changes to data privacy statements/notices;
– changes to how information is presented and communicated about data-processing activities;
– provision of controls for data subjects regarding the exercise of their rights.
Society facing:
– lobbying for legislative change;
– educating mass market on benefits (utility) of processing.
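Because each remedial action must map back to one or more identified root causes, and because the post-implementation review needs to check who was responsible for delivering each action and by when, the improvement plan itself is easy to hold in a structured register. The sketch below is our own illustration; the RemedialAction class, owners, dates and entries for the life-logging scenario are all hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class RemedialAction:
    """One line of an improvement plan: an action mapped to a root cause, with an owner and a deadline."""
    root_cause: str
    action: str
    control_type: str   # organizational, technological, customer/user facing, society facing
    owner: str
    due: date
    done: bool = False

# Hypothetical improvement plan entries for the life-logging scenario
plan = [
    RemedialAction("No face-redaction capability in stored video",
                   "Implement face redaction for third parties before storage",
                   "technological", "Engineering lead", date(2018, 9, 30)),
    RemedialAction("Bystanders are unaware that recording is taking place",
                   "Add a visible recording indicator and update the privacy notice",
                   "customer/user facing", "Product owner", date(2018, 8, 31)),
]

# Post-implementation review: flag anything that was agreed but not delivered on time
outstanding = [a for a in plan if not a.done and a.due < date.today()]
for a in outstanding:
    print(f"OVERDUE: {a.action} (owner: {a.owner}, due {a.due})")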
Step 10: Communicate
We skip straight to Step 10 at this point because assessment is not concerned with remediation, although this step is common
to both high-level phases. This step is a key supporting activity across the entire life cycle of the Ethical Impact Assessment
process. It relates to the need to document key findings and outputs during the activity. It is not an ‘end of project’ activity but is
rather an ongoing regular process. It is important to be clear about who the stakeholders are, who you are communicating with
and what their role will be. Are you seeking feedback? Are you seeking direct input? Will you be conducting ongoing testing of
assumptions? Are you just letting people know that you are still in existence?
Communication is a key element internally for driving ethical changes in information management, aligning the ethic of the
individual with the ethic of society. It is also a critical process for ensuring alignment between the ethic of the organization and
the ethic of society. These communication processes will not happen by accident and need to be properly designed and
managed in order to be effective.
Supporting and Extending the Methodology
The European Union has funded research into Ethical Impact Assessments through the Satori project (Satori Project, 2017). The aim of the project was to develop a common EU-wide framework for the ethical assessment of research and innovation. Over four years the project looked at a range of issues and perspectives on the question of Ethical Impact Assessments. Its outputs provide a useful resource for individuals and organizations looking to develop their own in-house Ethical Impact Assessment methodology. Among the issues the project has examined are:
the different types of assessment you might perform;
what types of ethical values, issues and principles might arise in different types of assessment;
understanding the trade-offs that might arise as a result of decisions made in Ethical Impact Assessments.
It is outside the scope of this chapter, and indeed this book, to review and summarize the entirety of the Satori project’s
outputs, but it is a valuable reference resource.
Chapter summary
In this chapter we have:
Set out a methodology for conducting Ethical Impact Assessments in an information management context.
This methodology is grounded in proven quality management principles and an established information quality
management framework.
Questions
1. What is the value in adopting a structured and standardized approach to Ethical Impact Assessments?
2. What other tools, techniques or methods from quality management might be applicable in this context?
3. There is a strong conceptual link between quality management, information quality management, data privacy and ethical
information management. What are the differences that exist and why are they important?
4. What would you identify as the critical components of a methodology for running Ethical Impact Assessments in your
organization?
Note
1. Tort law is the law of civil wrongs. It is the field of law you litigate in if you have slipped on the wet floor in a shopping
mall. Is it ethical for a shopping mall not to provide adequate notice of the wet floor and appropriate barriers? If you ignore
the barriers and notices, is it ethical that the shopping mall would need to pay for your injuries?
Article 29 Working Party [accessed 5 February 2018] Guidelines on Data Protection Impact Assessment (DPIA) and
Determining Whether Processing is ‘likely to result in a high risk’ For the Purposes of Regulation 2016/679 [Online]
http://ec.europa.eu/newsroom/document.cfm?doc_id=47711
Brey, PAE (2012) Anticipating ethical issues in emerging IT, Ethics and Information Technology, 14 (4), pp 305–17
BSR (Business for Social Responsibility) (2017) [accessed 20 October 2017] Case Study: Telia Company: Human Rights Impact Assessments [Online] https://bsr.org/en/our-insights/case-study-view/telia-company-human-rights-impact-assessments
Burgess, JP et al (2018) [accessed 5 February 2018] EDPS Ethics Advisory Group Report 2018 [Online] https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en
De Hert, P, Kloza, D and Wright, D, eds (2012) [accessed 5 February 2018] Recommendations For a Privacy Impact
Assessment Framework for the European Union; Brussels – London, November [Online]
http://www.piafproject.eu/ref/PIAF_D3_final
Moor, JH (2005) Why we need better ethics for emerging technologies, Ethics and Information Technology, 7 (3), pp 111–19
Nissenbaum, H (2009) Privacy in Context: Technology, policy, and the integrity of social life, Stanford University Press,
Stanford
Pasquale, F (2016) The Black Box Society: The secret algorithms that control money and information, Harvard University
Press, Cambridge, MA
Satori Project (2017) [Online] http://satoriproject.eu/
Telia (2017) [accessed 20 October 2017] Human Rights Impact Assessment – Telia Sweden [Online] http://www.teliacompany.com/globalassets/telia-company/documents/sustainability/hria/human-rights-impact-assessment-telia-sweden
Vallor, S (2016) Technology and the Virtues: A philosophical guide to a future worth wanting, Oxford University Press, New York
Wright, D (2011) A framework for the ethical impact assessment of information technology, Ethics and Information Technology, 13 (3), pp 199–226
Article 29 Working Party (2017) [accessed 20 October 2017] Guidelines on Data Protection Impact Assessment (DPIA) and
Determining Whether Processing is ‘likely to result in a high risk’ for the Purposes of Regulation 2016/679 [Online]
http://ec.europa.eu/newsroom/document.cfm?doc_id=44137
Cavoukian, A (2011) [accessed 20 October 2017] Privacy by Design: The 7 Foundational Principles [Online] https://www.ipc.on.ca/wp-content/uploads/Resources/7foundationalprinciples
Cavoukian, A (2017) [accessed 20 October 2017] AI Ethics by Design [Online] http://www.ryerson.ca/content/dam/pbdce/papers/AI_Ethics_by_Design
Dennedy, M, Finneran, TR and Fox, J (2014) The Privacy Engineer’s Manifesto: Getting from policy to code to QA to value,
Apress, Berkeley, CA
Holmberg, I, Ahlberg, M and Romberg, A (2017) [accessed 20 October 2017] Telia Company – Paving the Way for Responsible Business [Online] https://www.hhs.se/contentassets/6932d66acb534542aa0f4acc48fe83f3/rt-telia-final-october-9-2017
Juran, J (1986) [accessed 20 October 2017] The Quality Trilogy: A Universal Approach to Managing for Quality [Online]
http://pages.stern.nyu.edu/~djuran/trilogy1
McGilvray, D (2008) Executing Data Quality Projects: 10 steps to quality data and trusted information, Morgan Kaufmann,
Boston
Oliver, I (2014) [accessed 20 October 2017] Privacy Engineering: A data flow and ontological approach, CreateSpace
Independent Publishing Platform
Oxford Reference (2017) [accessed 20 October 2017] Neighbour Principle [Online] http://www.oxfordreference.com/view/10.1093/oi/authority.20110803100227619
Peters, T (2015) [accessed 20 October 2017] Management … the Arrangement and Animation of Human Affairs in Pursuit of Desired Outcomes [Online] http://tompeters.com/wp-content/uploads/2016/08/Management_collective_behavior_032215A-1
Satori Project (2017) [accessed 20 October 2017] Satori [Online] http://satoriproject.eu/
Trevino, LK (1986) Ethical decision making in organizations: a person-situation interactionist model, The Academy of
Management Review, 11 (3), pp 601–17