Ethics of Generative AI

Embodying Ethical Principles in the Design of Deep-Learning AI Systems
Project
Timeline
Role
Team
Tool(s)
Research Thesis
8 weeks
UX Researcher
UX Strategist
Solo contributor
Figma
Miro
Atlas.ti
Notion

WHAT'S THE PROJECT ABOUT?

My thesis study at SCAD focused on integrating ethical principles into the design of deep-learning AI systems to build trust, applying the perspective and practice of design management.

THE Challenge

AI Ethics on the Back Burner:
While AI is becoming more ubiquitous and powerful at blazing speed, the hidden implications of its mass adoption are concerning. Currently, the creators of AI systems have little incentive to factor in the risks and integrate ethical perspectives early on.

SYSTEMIC Intervention

Collegial and Paced Effort:
Through extensive research and collaboration with industry experts, I developed and validated three conceptual models for implementing AI ethics across LLMs, AI products, users, tech companies, and research organizations. I then synthesized a framework that embeds ethics within deep-learning AI systems at an early stage and empowers organizations to build trust in the use of these systems.

UX OUTCOME

3 Actionable Models
4 Levels of Innovation
2 Levels of Validation

UX IMPACT

Enhanced User Trust
Incentivized Collaboration
Competitive Differentiation

CONTEXT

Since the disruptive release of ChatGPT by OpenAI, things have not been the same: advanced algorithms now cater to many more interesting use cases.

However, the revolutionary convenience that these systems bring to our lives is accompanied by a dark cloud of incidents and controversies. Major potential risks include perpetuating structural discrimination, replicating existing biases, infringing on personal privacy, and making unpredictable decisions. Regulation also fails to keep pace with innovation.

As I was catching up on an array of impressive AI tools, I kept wondering:
Are we ready for what's coming next?
Do we even know what's coming next?

MY ROLE

As the solo researcher / design manager on this study, I led the following activities:

1 - framing the problem
2 - carrying out research
3 - analyzing the findings
4 - market benchmarking
5 - mapping the ecosystem

6 - defining the design criteria
7 - building a concept catalog
8 - validating the ideas
9 - prototyping to market
10 - devising an implementation plan



The Process

I employed the principles of a holistic design thinking process, with a stronger emphasis on research.

RESEARCH
Modeling the playing field

Research Space Map
Eighty-five percent of consumers say that it is important for organizations to factor in ethics as they deploy AI to tackle society’s problems.
Exploring Research Paths

Mapping the Actors

Motive: Identifying key players and their relative stake in the system
Medium: Preliminary study
Mapping: Stakeholder map


The primary targets for this project are US-based tech leaders, managers and researchers building generative AI systems.

The secondary targets are AI designers, developers, and data scientists, as well as tech influencers (such as AI experts).

Stakeholder Map

Sketching a map of Entities, Relationships, Attributes, and Flows (ERAF) helped visualize the interactions between different stakeholders and revealed the dynamics of power and influence.

Early ERAF Map

Primary Research

Primary Research Plan
It turns out that users are not explicitly aware of, or concerned about, the repercussions of using AI systems. In today's context, non-users seem to be more affected than users.



I conducted semi-structured interviews to examine how the practices of the participants align with AI ethics principles.

Each participant represented a different cross-section of experience and responsibility, making for a fine mix of thinkers (strategy) and doers (creation).


Primary Interview Participants
Anchors of Discussion

Affinity Mapping

Pain Points

There was a sharp contrast between the challenges faced by the users of AI systems and those faced by the tech companies developing them.


Concerns With Generative AI: Users vs. Orgs


Organizations had a variety of internal and external reasons for deprioritizing ethics in the bigger scheme of things.

SYNTHESIS
Integrating the findings
Survey Findings
Stakeholder Interviews: Snippets


Early inputs from the stakeholders confirmed my belief that designers and researchers can play a pivotal role in developing ethically sound AI systems.

Insights



Emerging Needs of AI as well as the Intervention



From the data I gathered in my interactions, it appeared that within a company, different teams make assumptions about who is responsible for AI ethics. For instance, a designer at a big tech company presumed that ensuring AI ethics is primarily the job of the legal team.

Map of Assumed Responsibility of Ethics Within a Company

In order to move the needle, it is important to understand and map out the system boundaries. Within this radar, stakeholders such as researchers, policymakers, and industry experts can collaborate to develop ethical guidelines, robust evaluation frameworks, and regulatory policies that promote accountable use of generative AI, ensuring its safe and beneficial deployment in society.

Stakeholder Ecosystem Outline

Innovation Ecosystem

To visualize how policymakers and stakeholders make decisions when forming regulations and taking initiatives, an innovation ecosystem map served as a valuable tool.


The map was then abstracted to focus only on the flows and relationships among the entities. This way, I could trace two underlying systems: one within the boundaries of tech orgs, and the other outside of them.

Innovation Ecosystem: Abstracted

Current Measures

I expanded upon the ecosystem map by labelling the various measures already being taken at the level of each stakeholder, so that my ideas would not turn out to be redundant.


This exercise exposed an important insight:

All the heavy lifting of AI ethics currently rests on the shoulders of tech developers alone. Maybe there is an opportunity to balance the load within the system.
MARKET ANALYSIS
Opportunity Hunting

While there are plenty of guidelines for AI ethics, I was on the lookout for those specifically targeted at designers and researchers.

The top-right quadrant was the ideal spot for this study to gain a perspective on similar market players and solutions.

2x2 Plot of Current AI Ethics Guidelines / Tools / Models / Frameworks for Designers and Researchers

Blue Ocean Canvas

For competitor analysis, I deployed a blue ocean canvas to benchmark ideal attributes and highlight potential opportunity areas.



The untapped growth areas revealed themselves, and I used them to craft the trajectory of my design innovation.

Reframed Opportunity Statement

At the beginning of this project, my opportunity statement focused on strategies specifically for designers and researchers of deep-learning AI to embed ethics early in development. But after analyzing the data from primary and secondary research, I learned that even if designers and researchers are in a position to make ethically sound decisions, the change needs to happen in a broader, more cohesive, and continuous manner.

To propose a cohesive intervention that equips companies leveraging generative AI to integrate and propagate ethics at all stages of development, thereby building transparency and sustainable trust among all AI users.


There are handy guides and toolkits for designers and researchers to practice AI ethics. But in times like these, the pressing need is for AI makers to have a tailored, reliable, and collaborative framework for their specific use of generative AI technologies.

Opportunity Mapping Matrix
DESIGN
Models of change

Design Criteria

Taking a cue from this synthesis and the desired attributes of an ethical AI system, I listed the criteria against which I would brainstorm conceptual models.

Because it was hard to meet all of them in one shot, I categorized them into must-have, should-have, and nice-to-have.

Concept Catalog

In the world of AI, it is crucial to separate utility from experimentation or demo purposes. The use of large language models by product companies can be likened to Lego building blocks or making your own salad: you subscribe to and acquire only the specific pieces you need to create something unique.


This approach would ensure a more controlled, targeted and conscientious use of AI resources, where companies collect nuanced data from users who genuinely require the technology.

Additionally, as a mechanism to prioritize ethical practices, companies could lose access to these subscriptions if they do not comply with ethical standards for deploying generative AI.



This shift also allows for more focused training of models and ensures that access to AI is not wide open for random purposes. As a result, many interesting use cases can emerge, encouraging companies to collaborate rather than engage in competitive battles. Ultimately, this collaboration serves to strengthen the algorithms powering AI advancements.
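To make the gating idea concrete, here is a minimal sketch assuming a hypothetical subscription registry and compliance flag; none of the names below come from the project or from any real provider API.

```typescript
// Hypothetical compliance-gated access to subscribed model capabilities.
// All names are illustrative; they do not map to a real provider API.
interface CapabilitySubscription {
  company: string;
  capability: string; // e.g. "summarization", "image-captioning"
  compliant: boolean; // set by periodic ethics audits (hypothetical)
}

const subscriptions: CapabilitySubscription[] = [
  { company: "Acme Health", capability: "summarization", compliant: true },
  { company: "Acme Health", capability: "image-captioning", compliant: false },
];

// Access is granted only if the company subscribes to the capability
// and its latest ethics audit found it compliant.
function canUseCapability(company: string, capability: string): boolean {
  return subscriptions.some(
    (s) => s.company === company && s.capability === capability && s.compliant
  );
}

console.log(canUseCapability("Acme Health", "summarization"));    // true
console.log(canUseCapability("Acme Health", "image-captioning")); // false
```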




The second approach combines enhanced product transparency with user discretion and control.


To make interactions transparent, full disclosure about data collection is crucial (what, how, why, and for how long). Just like the symbols used to distinguish vegetarian from non-vegetarian food, NFT watermarking can be utilized to identify:
(a) if AI was used to generate an output,
(b) to what extent, and
(c) whether the AI model is in an experimental stage.
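As a purely illustrative sketch of what such a disclosure could look like in practice, the record below captures the three identifiers above along with the data-collection details; the interface and field names are hypothetical, not part of the original framework.

```typescript
// Hypothetical shape of a provenance/disclosure record attached to an AI-generated output.
// Field names are illustrative only; they are not drawn from any existing standard.
interface AIOutputDisclosure {
  aiGenerated: boolean;          // (a) was AI used to generate this output?
  aiContributionPercent: number; // (b) rough extent of AI involvement, 0-100
  experimentalModel: boolean;    // (c) is the underlying model still experimental?
  dataCollected: string[];       // what user data was collected
  collectionPurpose: string;     // why it was collected
  retentionPeriodDays: number;   // how long it will be retained
  watermarkId?: string;          // optional watermark/token reference for traceability
}

// Example: a disclosure record a product might surface alongside a generated caption.
const disclosure: AIOutputDisclosure = {
  aiGenerated: true,
  aiContributionPercent: 90,
  experimentalModel: true,
  dataCollected: ["prompt text", "session id"],
  collectionPurpose: "model improvement",
  retentionPeriodDays: 90,
  watermarkId: "wm-0001",
};

console.log(disclosure);
```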

Providing decision maps can help users understand the rationale behind the outcomes. Additionally, AI tools can offer three outputs for users to choose from, with varying degrees of accuracy, fairness, personalization, privacy, and traceability.

Another way to boost trust is to disclose the reciprocal value users receive for the data they share, and to offer creative ways to share data (e.g., in exchange for money or other benefits). Conducting Reverse Wizard of Oz and Bug Bounty trials can be fun ways for users to participate and share feedback.



At the other end, users have stronger control in terms of consent, opting out, purging the data collected, reporting or flagging any issues, recording whether an output surprised them, and engaging in community forums to share experiences.
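A minimal sketch of these user-side controls, assuming a hypothetical action type and handler; this is purely illustrative and not part of the project deliverables.

```typescript
// Hypothetical set of user-side controls implied by the concept; names are illustrative.
type UserControlAction =
  | { kind: "withdrawConsent" }
  | { kind: "optOut"; feature: string }
  | { kind: "purgeData" }
  | { kind: "flagIssue"; outputId: string; reason: string }
  | { kind: "markSurprising"; outputId: string };

// A product could log these requests for audit and act on them asynchronously.
function handleUserControl(action: UserControlAction): void {
  switch (action.kind) {
    case "withdrawConsent":
      console.log("Consent withdrawn; halting further data collection.");
      break;
    case "optOut":
      console.log(`Opted out of feature: ${action.feature}`);
      break;
    case "purgeData":
      console.log("Purging all collected data for this user.");
      break;
    case "flagIssue":
      console.log(`Output ${action.outputId} flagged: ${action.reason}`);
      break;
    case "markSurprising":
      console.log(`Output ${action.outputId} marked as surprising.`);
      break;
  }
}

handleUserControl({ kind: "flagIssue", outputId: "out-42", reason: "biased wording" });
```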






Companies riding the generative AI wave face various blockers to prioritizing and integrating ethical practices. These include arranging the necessary resources, developing their own ethics models, aligning the organization on ethical standards, modifying data collection and storage practices, and ensuring that their IP (secret sauce) does not get leaked.





The current landscape of generative AI calls for a participatory model that leverages the strengths multiple entities bring, to accurately establish, assess, and measure ethical considerations at all stages of development.


By recognizing the pivotal role of all stakeholders, we can achieve AI systems that are not only technologically advanced but also ethically sound, ultimately fostering trust, accountability, and positive societal impact.


The symbiotic framework concept sits at the periphery of the ecosystem, between the micro and macro levels. Making changes at this meso level offers the advantages of broader impact, systemic change, improved efficiency and coordination, collective resources, scalability, and easier individual-level change. It provides an opportunity to create lasting, meaningful improvements within organizations and communities.


A quick Impact vs. Time matrix helped me identify which of the three concepts would be most fruitful to implement in the short versus the long term.


3 Concepts: Impact vs Time Plot

Early Prototype

Presented below is a high-level symbiotic model that involves key stakeholders and calls out their roles, responsibilities and mutual interactions.

VALIDATE
Improving upon the ideas

Concept Testing Protocol

3 concepts
2 participants
Virtual meet
Qualitative feedback


The participants analyzed each concept in depth and identified the benefits and challenges of each. These inputs guided me in improving the concepts and proposing an implementation plan.


Based on the time each concept would take to implement, the perceived benefits of the one already in execution, and the potential impact at scale, I identified actionable next steps.


Concept #2 Validation: Google Bard

Disclosure of Experimental Stage AI
Offer Multiple Outputs of Varying Relevance
Stronger User Control
PACKAGE
Bringing it all together

Value Proposition

It was time to tie up the loose ends and clearly demarcate the value offered by the symbiotic framework for generative AI ethics.

Branding Moodboard

Alongside system modeling, I did a small branding exercise to communicate how some of these ideas would work in action.

Not Human: Usage Sample

Framework


The symbiotic framework invites a strong partnership between companies and research institutes, not-for-profit organizations, and auditing agencies.


The collaboration allows companies to focus on innovating by outsourcing the nitty-gritty of AI ethics integration to reliable experts and neutral parties.

In this way, companies are incentivized: they save money, effort, and time, preserve their IP, receive customized solutions, and gain user trust.

Implementation Plan

Here is a tentative plot of the three concepts on a timeline, expressed in absolute years but also showing each concept relative to the others.















REFLECTION

One of our core responsibilities as designers is to understand the ethical impact of what we create. Designing for users' trust is a valuable skill, regardless of the problem being solved. I saw a great opportunity to figure out how designers and researchers can tackle ethics during, and not after, the development stages of generative AI. This experience has equipped me with a strong sense of commitment to design products that prioritize user needs and well-being.

Limitations

Challenges In This Journey

Open Questions

Closing Thoughts






Let’s talk business

I love making connections, so feel free to reach out and say hello!

Need a systems thinker who can drive business outcomes?
Or have thoughts to share on my work?

Either way, I love making connections.
Feel free to connect via LinkedIn, email me, or schedule a time on my calendar.

LinkedIn · Gmail · Calendly

Let's get talking!