Chapter Contents

Applying AI to Your Workflows

An Agentic Strategy

Workday Agent System of Record

Workday and a Differentiated AI Foundation

Embedded AI: The Architecture of Workday Illuminate

The Workday Approach to Protecting Your Data with AI Security

Responsible AI: The Key to Sustainable AI Innovation in Business

The Workday Responsible AI Program: Building Trust to Create Real Value

CHAPTER 3

Workday Illuminate™: Transform work with AI.

AI is transforming how we live and work. By automating mundane tasks and supporting faster decision-making, AI empowers people to reach their full potential at work. At Workday, we believe successful AI requires a strategic, human-centered approach. This means focusing on clear, value-oriented goals; high-quality data; and a clean user experience while prioritizing security, privacy, and explainable AI. We’ve been investing in AI since 2014, beginning with our acquisition of Identified, an analytics-based recruiting software company, and we continue to expand our strategy as AI evolves.

Applying AI to your workflows.

Workday Illuminate™ is the next generation of Workday AI, and its value will play out in numerous real-world scenarios. Instead of relying on general-purpose large language models (LLMs) trained on public internet data, which can lead to inaccurate and irrelevant results, Illuminate relies on a massive, constantly growing HR and finance dataset with 800 billion transactions from more than 70 million end users. Illuminate accelerates your workflows, assists your teams, and transforms your business. This combined approach unlocks even greater benefits and maximizes your ROI.

Accelerate.

Automate manual processes in Workday and enhance productivity using traditional machine learning and Gen AI. This intuitive form of AI helps users complete tasks faster, increasing productivity and reducing costs so they can focus on higher-level work. Examples of using Gen AI capabilities to accelerate processes include speeding up content creation and summarizing job descriptions, knowledge articles, and contracts. This category also automates tasks such as anomaly detection, resource planning predictions, and receipt scanning powered by optical character recognition (OCR).

Assist.

Simplify getting work done inside of Workday. Using a combination of traditional machine learning and Gen AI, Workday provides a conversational, context-aware experience. Receive continuous natural language assistance in the flow of your work, at the right time. For example, Workday Assistant can help you quickly find information and either take action or offer insights and recommendations for better decision-making. This empowers you to focus on the more strategic and creative work that is best suited for humans.

Transform.

Completely transform traditional business processes with AI orchestration. Here, AI isn’t just used to speed things up—it fundamentally changes processes and how people support them. For example, AI agents can anticipate and autonomously perform tasks in the background to supercharge human decision-making.

An agentic strategy.

The Workday vision for AI agents transcends simple task completion.

Our agentic strategy paves the way for a future where Workday—with our partners and customers—builds role-based agents that each contain a configurable set of skills to support people in their jobs. These agents have the potential to deliver transformational value beyond automating individual tasks. Let’s look at three examples that illustrate how AI agents are transforming critical business processes.

Recruiting agent.

A recruiter’s goal is to attract and acquire top talent for an organization, but much of their time is spent on sourcing and qualifying candidates. A recruiting agent can streamline and enhance the recruitment process by autonomously managing candidate sourcing and screening. This frees recruiters to focus more on the human aspects of recruiting, such as relationship management and engaging with candidates and hiring managers.

For example, a recruiting agent can:

  • Autonomously pull qualified candidates for any new job opening from an organization’s existing candidate pool and add them to the short list
  • Scan resumes to extract key information, such as skills, work history, and education
  • Automatically screen out candidates whose resumes do not meet the minimum criteria
  • Produce a prequalified short list of candidates for review
  • Autonomously schedule interviews for qualified candidates while coordinating with the interview team’s schedule
  • Collect key insights throughout the recruiting process and surface metrics, highlighting areas for improvement

Shift scheduling agent.

Shift scheduling is crucial for organizations that manage frontline workers to ensure adequate staffing coverage while avoiding burnout. With a shift scheduling agent, organizations can use AI to automate much of the process, reducing errors and improving staff coverage. By orchestrating multiple AI models, a scheduling agent ensures the right staff is in the right place at the right time—potentially reducing overtime costs, improving store efficiency and customer satisfaction, and increasing employee happiness.

For example, in a retail setting, a shift scheduling agent can:

  • Analyze historical data, inventory outflow, and other factors to forecast staffing needs
  • Match staff availability, roles, and expertise with the forecasted demand so each shift has the necessary coverage (see the sketch after this list)
  • Check for compliance with labor laws, such as maximum work hours and mandatory rest periods, and ensure fair distribution of shifts among employees
  • Quickly recommend schedule adjustments to the shift manager in order to reallocate resources to maintain optimal coverage when there is an unexpected absence or sudden change to in-store traffic
  • Continuously collect data on staff performance, department performance, and employee scheduling preferences, and use this information to improve and refine future schedules
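
To make the matching step concrete, here is a deliberately simplified sketch (illustrative only, not Workday code) of how forecasted shift demand could be greedily matched against staff availability, roles, and a weekly-hours cap:

```python
# Hypothetical illustration only -- a toy greedy matcher that assigns
# available staff to forecasted shift demand while respecting a simple
# weekly-hours cap, in the spirit of the capabilities listed above.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    role: str
    available: set           # shift ids the worker can cover
    max_hours: int = 40
    assigned_hours: int = 0

@dataclass
class Shift:
    shift_id: str
    role: str
    hours: int
    needed: int               # headcount forecast for this shift

def schedule(shifts: list, workers: list) -> dict:
    """Greedily fill each shift with eligible workers, spreading hours fairly."""
    plan = {s.shift_id: [] for s in shifts}
    for shift in shifts:
        # Eligible = right role, available for this shift, and under their hours cap.
        eligible = [w for w in workers
                    if w.role == shift.role
                    and shift.shift_id in w.available
                    and w.assigned_hours + shift.hours <= w.max_hours]
        # Prefer workers with the fewest assigned hours so shifts are shared fairly.
        eligible.sort(key=lambda w: w.assigned_hours)
        for worker in eligible[:shift.needed]:
            plan[shift.shift_id].append(worker.name)
            worker.assigned_hours += shift.hours
        if len(plan[shift.shift_id]) < shift.needed:
            print(f"Understaffed: {shift.shift_id} needs "
                  f"{shift.needed - len(plan[shift.shift_id])} more {shift.role}(s)")
    return plan

workers = [Worker("Ana", "cashier", {"sat_am", "sat_pm"}),
           Worker("Ben", "cashier", {"sat_am"}),
           Worker("Cal", "stocker", {"sat_am"})]
shifts = [Shift("sat_am", "cashier", 8, needed=2),
          Shift("sat_pm", "cashier", 8, needed=1)]
print(schedule(shifts, workers))
```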

Quarter-end close agent.

Closing the books requires flawless execution across numerous tasks. An AI agent can streamline and accelerate the closing process, minimizing errors, ensuring regulatory compliance, lowering operational costs, and providing real-time analysis for better decision-making.

For example, a quarter-end close agent can:

  • Automatically compile financial data from bank statements, expense reports, and other financial documents and reconcile transactions across multiple accounts, identifying and flagging discrepancies for further review
  • Categorize expenses and revenues, matching them to the correct accounts and linking invoices to corresponding purchase orders and receipts, ensuring that all expenditures are accurately recorded and verified
  • Create and post routine journal entries, such as those for depreciation, amortization, and accruals, ensuring they comply with accounting standards and providing suggestions for correcting anomalies and errors
  • Perform continuous audits of financial data to quickly identify potential issues or fraud, ensuring all financial records adhere to relevant regulations and accounting standards
  • Generate key financial statements such as balance sheets, income statements, and cash flow statements so they’re ready for review by management and external auditors
  • Conduct variance analysis, comparing actuals to forecasts and providing insights into any deviations
  • Manage the closing task workflow, assigning responsibilities to team members and tracking the progress of each task
  • Facilitate the review and approval process by providing detailed reports and insights to finance managers and ensuring all necessary steps are completed before finalizing the close

Workday Agent System of Record.

AI agents can perceive details of the surrounding environment; process and reason through complex, multistep tasks; and execute tasks to achieve specific goals. This allows employees to focus on tasks that benefit from uniquely human traits such as conversations that require empathy and emotion. As more AI agents are created, your digital workforce will expand and require management, just like your workforce of employees and contractors.

Since our beginning, Workday has been making it easier to manage your workforce, including your employees, your contingent labor, and now, your digital workforce. Workday Agent System of Record provides a single source of truth for building and managing AI agents used across your workforce. The Workday Agent System of Record command center governs, manages, audits, and monitors AI agents so IT and business leaders can see how agents are collaborating with people, impacting work, and creating measurable value.

Key capabilities of Workday Agent System of Record include:

  • Agent development: Build and customize agents leveraging Workday Extend, enabling you to easily integrate with leading AI platforms and third-party agents.
  • Centralized management: Bring all your agents, regardless of their origin, into a single system of record so you can more easily register, monitor, manage, and govern your entire digital workforce (a simplified registry sketch follows this list).
  • Agent onboarding: Swiftly onboard agents for immediate productivity with skills definitions, role assignments, and appropriate access to company-specific knowledge.
  • Agent orchestration: Connect Workday and third-party agents to collaborate and take actions toward common goals, such as connecting disparate data sources and sharing tools to support complex reasoning.
  • Agent deployment: Seamlessly deploy agents with automated configuration, access control, and policy enforcement for a secure and compliant launch.
  • Agent UX with a human-in-the-loop approach: Incorporate agents smoothly into the natural flow of work and provide human oversight by using Workday Assistant as the primary interface to interact with agentic capabilities.
  • Governance and reporting: Ensure compliance and transparency with comprehensive reports, audits, and clear performance insights.
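
As a purely hypothetical illustration (not the Workday data model), the sketch below shows the kind of record and registry operations a system of record for agents implies: identity and origin, skills, role assignments, lifecycle status, and an audit trail.

```python
# Hypothetical sketch only -- not the Workday data model. It illustrates the
# kind of record an agent system of record might keep: identity, skills,
# role-based access, lifecycle status, and an audit trail of actions taken.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str
    name: str
    origin: str                      # "workday", "partner", or "customer"
    skills: list = field(default_factory=list)
    allowed_roles: list = field(default_factory=list)
    status: str = "registered"       # registered -> onboarded -> active -> retired
    audit_log: list = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

class AgentRegistry:
    """A single place to register, look up, and govern agents of any origin."""
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record
        record.log("system", "registered")

    def onboard(self, agent_id: str, roles: list) -> None:
        record = self._agents[agent_id]
        record.allowed_roles = roles
        record.status = "onboarded"
        record.log("admin", f"onboarded with roles {roles}")

registry = AgentRegistry()
registry.register(AgentRecord("agt-001", "recruiting-agent", "workday",
                              skills=["source candidates", "screen resumes"]))
registry.onboard("agt-001", roles=["Recruiter"])
```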

As Workday, our customers, and our partners all develop more AI agents, our agentic strategy will continue to evolve. Visit AI Agents for the latest updates.

Workday and a differentiated AI foundation.

Figure 3-1. The AI district on the city map.

To bring the Workday Illuminate vision to life, Workday has embedded AI in our core application architecture, as illustrated in the city map. This allows AI solutions to be deployed directly within Workday applications without requiring a separate “build-your-own” tool bench that necessitates IT procurement, customization, and maintenance. We leverage a unified AI technology foundation across our platform, enabling us to build and deliver AI across many HR, financial, and procurement use cases and seamlessly adopt new AI capabilities without rearchitecting or rebuilding applications. This also ensures our AI can scale to all Workday customers simultaneously without complex upgrades, lengthy implementations, or additional costs. To start realizing value, customers simply toggle on AI features from a single dashboard within their tenant, which transparently displays the data used for each AI feature.

But even with the best technology foundation, AI is only as powerful as the quality and relevance of its data. To further differentiate the value of Illuminate, Workday has the largest, cleanest set of HR and finance data in the world. Our 70M+ end users produce 800B+ annual transactions (all in a uniform data model), leading to more accurate, relevant, and organization-specific outcomes.

With the right data, the next step is feeding business context into AI to unlock opportunities to improve individual processes and ultimately transform entire business functions. With 72M+ monthly business process events, Workday understands the past and present business context surrounding our data—including context about business processes, tasks, data, user information, and conversational prompts.

Next, we’ll walk through the actual Illuminate architecture before finishing with a look at how we apply ethical standards to Illuminate through responsible AI (RAI).

Embedded AI: The architecture of Workday Illuminate.

Workday has developed a robust, flexible, and secure architecture for AI development, testing, and deployment. This chapter explores the different facets of the AI architecture in Workday, describing how each part works and why it’s important. The architecture’s core sections are:

  • LLM gateway
  • Model catalog
  • LLM inference and fine-tuning
  • Retrieval-augmented generation (RAG)
  • Data access, pipeline, and storage
  • Client access (XpressO, Workday Extend/Workday AI Gateway)
  • GenAI Studio and LLM Playground

Figure 3-2. Workday Illuminate architecture.

Key terms.

Before we dive into how the architecture works, let’s define some common terms.

  • Foundational model: A large language model (LLM) trained on a massive dataset to understand and generate human language. Think of it as the base AI “brain.” Examples include OpenAI GPT-4o and Google Gemini Pro 1.5.
  • Parameter: A numerical value, learned during training, that shapes how the model behaves. Foundational models have millions or even billions of parameters, which together determine their capabilities.
  • Training: The process of “teaching” the foundational model using a vast amount of text data. This is how the model learns to understand and generate language.
  • Fine-tuning: Adapting a foundational model for a specific task by training it further on a smaller, more focused dataset. This is like specializing the model’s skills.
  • Use case: A specific application of AI within Workday, powered by a fine-tuned model. Examples include Workday Q&A generation and Workday document intelligence.
  • AI adapter: A customized version of a use case, fine-tuned to meet a customer’s specific needs.
  • Prompt: The input given to an LLM, which can be a question, a command, or a piece of text.
  • Chunking: Breaking down large documents or datasets into smaller, more manageable pieces for the AI to process.
  • Grounding: Verifying that the LLM’s output is accurate and relevant to the given prompt and context, ensuring the AI stays on track.

Now, let’s explore the AI architecture in Workday.

The architecture of Workday Illuminate: The LLM gateway.

The LLM gateway is the central hub of the Illuminate architecture, providing a set of services and a single point of access for all Workday developers and customers. It offers the necessary DevOps infrastructure, including the latest AI models and tools, enabling anyone to leverage AI seamlessly. The gateway provides demo environments, deployment environments, and APIs for external integrations, making it easy for developers to test and tune for new feature development. It enables XpressO to access any deployed AI technology. Finally, the LLM gateway allows Workday to control access to data and models, preventing data leaks and malicious access while minimizing external costs for third-party LLM access.

The LLM gateway handles the enablement of Workday tenants, allowing every customer to access the gateway across their Workday deployment. It handles resource allocations, allowing internal services to be dynamically scaled with demand. The gateway includes extensive logging for comprehensive audits. It also exposes APIs so developers and customers can use tools like Workday Extend and XpressO to integrate AI into all parts of the Workday platform.
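
The sketch below illustrates the general pattern described here, using invented names rather than the actual Workday API: a single entry point that checks tenant enablement, routes the request to a model, and writes an audit record for every call.

```python
# Generic illustration of the pattern described above -- not the Workday API.
# A single entry point checks that the calling tenant is enabled, routes the
# request to a model, and writes an audit record for every call.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("llm_gateway.audit")

ENABLED_TENANTS = {"tenant-a", "tenant-b"}          # assumed enablement list

def call_model(model_name: str, prompt: str) -> str:
    """Stand-in for routing to a model in the catalog."""
    return f"[{model_name}] response to: {prompt[:40]}"

def gateway_request(tenant_id: str, user: str, model_name: str, prompt: str) -> str:
    if tenant_id not in ENABLED_TENANTS:
        audit.warning("denied tenant=%s user=%s model=%s", tenant_id, user, model_name)
        raise PermissionError(f"AI features are not enabled for {tenant_id}")
    response = call_model(model_name, prompt)
    audit.info("ok tenant=%s user=%s model=%s at=%s",
               tenant_id, user, model_name, datetime.now(timezone.utc).isoformat())
    return response

print(gateway_request("tenant-a", "jdoe", "contract-qa", "Summarize clause 4.2"))
```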

AI model catalog: Powering Workday with advanced AI.

With the rapid development of new AI models, the model catalog is the most dynamic part of the AI architecture. All models are accessible through the LLM gateway, providing everyone developing AI functionality—from within Workday to third-party providers—with full access to the latest models from a central location. This includes generative LLMs and machine learning algorithms used across the Workday platform for tasks such as statistical analysis and predictive reasoning.

The catalog also provides third-party tools, such as Llama Guard, used for content moderation. This ensures the input and output of generative LLMs are within the bounds of acceptable language. Additionally, it includes analytics and evaluation tools to help quantify the accuracy of models as they are trained and tuned.

To ensure the responsible use of AI, the model catalog incorporates grounding and other post-processing logic that keeps model output aligned with expectations. For example, if a customer submits a generative LLM request in the context of a private legal contract, the LLM gateway verifies that the output the LLM generates and presents is actually present within the contract in question. This process limits hallucinations and malicious prompt injections. While Workday does at times use third-party models, sensitive customer data is never sent externally; in such cases, the models are self-hosted within the LLM gateway for maximum security.
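
As a rough illustration of the grounding idea (not the Workday implementation), the following sketch keeps a generated sentence only if it closely matches text in the source contract:

```python
# A minimal sketch of the grounding idea, illustrative only: keep a generated
# sentence only if it closely matches text in the source contract, so nothing
# is presented that cannot be traced back to the document.
import difflib
import re

def sentences(text: str) -> list:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def grounded(generated: str, source: str, threshold: float = 0.8) -> list:
    """Return only generated sentences with a close match in the source text."""
    source_sents = sentences(source)
    kept = []
    for sent in sentences(generated):
        best = max(
            (difflib.SequenceMatcher(None, sent.lower(), src.lower()).ratio()
             for src in source_sents),
            default=0.0,
        )
        if best >= threshold:
            kept.append(sent)
    return kept

contract = "The renewal term is twelve months. Either party may terminate with 30 days notice."
draft = "The renewal term is twelve months. The contract renews automatically for five years."
print(grounded(draft, contract))   # only the first, supported sentence survives
```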

LLM inference and fine-tuning for efficient AI.

While the model catalog is the most dynamic aspect of Illuminate architecture, LLM inference and fine-tuning are arguably its most crucial facets. This process manages data sovereignty, accuracy, and efficiency. Using a foundational model, often containing billions of parameters, to produce output for every task is costly and time-consuming. For example, it’s inefficient to use massive computing power to work through an 80-billion-parameter LLM just to get small details from a 250-page legal contract. Additionally, providing a public or third-party model with sensitive data such as a legal contract could pose security risks.

To address this, Workday leverages the foundational models from the model catalog and fine-tunes them for smaller, more specific use cases. This reduces the number of parameters from billions to millions, creating more-efficient models tailored to specific needs and contexts. Customers further refine these models with their own proprietary datasets, securely isolated in their own tenant, using retrieval-augmented generation (RAG)—more on this in the next section. This creates a unique, isolated instance called an AI adapter.

AI adapters are loaded into active memory for specific use cases. Each use case has a dedicated computational node with its own memory and GPU resources. These nodes operate in a dual, active-active configuration for automatic failover protection. The system also scales up additional nodes based on demand.

Figure 3-3. Manage accuracy, efficiency, and sovereignty with fine-tuning.

In essence, the AI adapter is a smaller, highly tuned model that can be quickly loaded and processed and then removed from the active memory. Importantly, the customer’s sensitive data always remains in their control within the RAG storage.

Retrieval-augmented generation.

RAG enhances large language models by incorporating specific documents to narrow the context of a prompt response. This is essential for handling sensitive information. For example, if a customer wants to ask a question about a batch of stored invoices, they wouldn’t be able to go to a public LLM such as ChatGPT. This would expose internal data to external systems, creating a security risk.

The RAG process involves four key steps (a minimal sketch follows the list):

  1. Prompt input. The user inputs the question to the LLM.
  2. Chunking. The system identifies and filters the relevant sections within the documents.
  3. Response generation. The AI adapter processes the prompt within the context of the selected data and generates a response.
  4. Grounding. The response is filtered and returned with citations indicating the source document(s) used to answer the question, ensuring transparency and accuracy.
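
Here is a minimal, self-contained sketch of those four steps. It is illustrative only: simple keyword overlap stands in for vector retrieval, and a stub stands in for the fine-tuned AI adapter.

```python
# A minimal sketch of the four RAG steps above, illustrative only. Keyword
# overlap stands in for vector retrieval, and generate() stands in for the
# fine-tuned AI adapter; a real pipeline would use embeddings and an LLM.

def chunk(doc_id: str, text: str, size: int = 40) -> list:
    """Step 2 (chunking): split a document into small, citable pieces."""
    words = text.split()
    return [{"doc": doc_id, "text": " ".join(words[i:i + size])}
            for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list, k: int = 2) -> list:
    """Pick the k chunks that share the most words with the question."""
    q = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c["text"].lower().split())),
                  reverse=True)[:k]

def generate(question: str, context: list) -> str:
    """Stand-in for the AI adapter answering within the retrieved context."""
    return f"Answer to '{question}' based on {len(context)} retrieved chunk(s)."

def rag(question: str, documents: dict) -> dict:
    chunks = [c for doc_id, text in documents.items() for c in chunk(doc_id, text)]
    context = retrieve(question, chunks)                       # steps 1-2
    answer = generate(question, context)                       # step 3
    citations = sorted({c["doc"] for c in context})            # step 4 (grounding)
    return {"answer": answer, "citations": citations}

docs = {"INV-1042": "Invoice INV-1042 totals 12,400 USD, due on March 31 net 30 terms.",
        "INV-1043": "Invoice INV-1043 totals 8,100 USD, paid in full on February 2."}
print(rag("What is the total due on invoice INV-1042?", docs))
```

In production, the retrieval step queries the tenant-specific store described later in this chapter, and the citations returned in step 4 are what make the answer auditable.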

Data access, pipeline, and storage in Workday Illuminate.

When it comes to AI and your data, three key questions arise:

  1. Where is the data stored?
  2. Who has access to the data?
  3. How is the data used?

Workday prioritizes data security and privacy. All customer data is stored in isolated tenant-specific buckets on Amazon Simple Storage Service (Amazon S3). This architecture ensures data segregation and prevents co-mingling of information between different customers.
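
As an illustration of this isolation pattern (the bucket name, prefix scheme, and key management below are assumptions, not the Workday layout), every object can be written under its own tenant’s prefix with server-side encryption:

```python
# Illustrative sketch only -- the bucket name, prefixes, and key management
# here are assumptions. It shows the general pattern the paragraph describes:
# every object lives under its own tenant's prefix and is written with
# server-side encryption, so tenants' data never co-mingles.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-illuminate-tenant-data"      # hypothetical bucket name

def tenant_key(tenant_id: str, object_name: str) -> str:
    """All keys are namespaced by tenant, e.g. tenants/acme/invoices.parquet."""
    return f"tenants/{tenant_id}/{object_name}"

def put_tenant_object(tenant_id: str, object_name: str, data: bytes) -> None:
    s3.put_object(
        Bucket=BUCKET,
        Key=tenant_key(tenant_id, object_name),
        Body=data,
        ServerSideEncryption="aws:kms",        # encrypted at rest
    )

def list_tenant_objects(tenant_id: str) -> list:
    """Reads are scoped to the caller's prefix; other tenants' keys are never listed."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"tenants/{tenant_id}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]
```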

To further protect your data, Workday enforces strict access controls:

  • Authorized personnel only. Only authorized Workday employees with specific roles and responsibilities, such as those involved in machine learning operations or customer support, can access customer data. These employees undergo formal training and certifications to handle sensitive information according to strict security and privacy protocols.
  • Full auditability. All data access and processing activities are meticulously logged, providing a comprehensive audit trail for accountability and transparency. These logs track who accessed the data, when they accessed it, and what actions they performed.
  • No duplicate data. When used for AI model training and fine-tuning, data is never copied. This ensures a single source of truth and maintains data integrity within each tenant’s environment. Workday employs techniques such as differential privacy and federated learning to train models on data without copying it.

This rigorous approach ensures that your data remains secure, confidential, and used responsibly within the Workday Illuminate framework.

Building AI applications with XpressO, Workday Extend, and the Workday AI Gateway.

There are two primary ways to leverage the LLM gateway within the Workday platform:

  1. XpressO. This method empowers Workday developers to seamlessly integrate AI features into the core Workday product using APIs. The LLM gateway’s flexible architecture enables these AI features to be built directly into various applications and functionalities across the platform. This means Workday doesn’t have a separate AI product; instead, AI capabilities are woven into the fabric of the Workday platform itself.
  2. Workday Extend with the Workday AI Gateway. This path empowers customers to build their own custom applications and extend the functional power of Workday with AI. While Workday Extend provides a broad range of development capabilities, the Workday AI Gateway specifically allows customers to create new apps that leverage the power of LLMs and other AI models. (Workday Extend is covered in more detail in chapter 6.)

Workday internal AI tools: GenAI Studio and LLM Playground.

GenAI Studio and the LLM Playground are key components of the Workday Illuminate architecture. While not directly accessible to customers, these internal tools are essential for the development of AI within Workday. They provide an environment where Workday developers can experiment with, build, and test new AI features.

By providing access to the latest AI models, tools, and resources, GenAI Studio and the LLM Playground empower developers to:

  • Rapidly prototype and iterate on new and enhanced AI functionalities
  • Explore innovative applications of AI across the Workday platform
  • Develop and deploy AI solutions more efficiently

These tools play a crucial role in the ongoing efforts of Workday to enhance products and services with cutting-edge AI capabilities.

The Workday approach to protecting your data with AI security.

Too often, the conversation about AI and data security assumes a trade-off between the two. After a decade of developing and delivering high-value AI capabilities, Workday has proven this to be a false premise. With the right approach and exceptional engineering, customer data can be secured without compromising the value and effectiveness of enterprise AI.

Workday has created multiple security approaches to AI model training, selecting the most appropriate one for each use case and feature. The approach is determined primarily by the sensitivity of the use case and the data involved, followed by a risk assessment to validate the decision. While every feature has a specific and appropriate security approach, most fall into three categories:

1. Company-specific models.

Some use cases, such as gaining organization-specific insights and detecting anomalies from accounting data (for example, journal insights), involve proprietary and sensitive data where any amount of risk for data leakage is unacceptable. Additionally, there’s no benefit in having an AI model learn from other organizations’ financial data because insights are not transferable or relevant to other customers. In these cases, Workday fine-tunes smaller base models to get tenant-specific answers. Each tenant receives their own isolated model, preventing data leakage between tenants.

2. Fine-tuning on shared large models.

Some HR and finance data is not sensitive (publicly available job descriptions are likely already in LLM training sets). For example, the “generate job descriptions” feature uses a single base model that is fine-tuned on publicly available job descriptions. This model is shared between tenants. The GenAI output is specific to each company and the user-defined parameters of the role. Since only nonsensitive, publicly available data is used for training, the same model isolation is not required. A shared model trained on public data allows for higher-quality output.

3. Shared LLM without full retraining.

Like journal insights, company-specific knowledge bases contain sensitive information unsuitable for training a shared model. However, an LLM is needed to navigate the complexity of multiple sets of unstructured data and generate relevant outputs.

Workday uses RAG, starting with an open LLM fine-tuned with public, nonproprietary data for a specific use case. The data used for training this model is minimized and de-identified. Then, a per-tenant vector database is created with only tenant-specific data to augment the fine-tuned LLM. At runtime, when a user enters a prompt, it is processed against the tenant-specific vector database and the fine-tuned LLM to produce a high-quality, organization-specific output. Only using company-specific data at runtime prevents data leakage between tenants. Additionally, the shared LLM doesn’t combine company-specific data and doesn’t “learn” from the prompts or generated outputs.

Similar approaches apply low-rank adaptation (LoRA) adapters, achieving comparable results with a different technical approach. These techniques address the challenge of massive LLMs, where retraining or fine-tuning for tenant-specific models is impractical. RAG and LoRA adapters enable the secure use of shared LLMs across multiple tenants while ensuring tenant-specific results.
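
As a rough sketch of the LoRA idea using the open-source transformers and peft libraries (the base model and hyperparameters are arbitrary examples, not Workday’s configuration), a small adapter is attached to a frozen base model so that only a fraction of the weights are trained:

```python
# Illustrative sketch of the LoRA technique using the open-source transformers
# and peft libraries; the base model and hyperparameters are arbitrary
# examples. Only the small adapter matrices are trained, so one shared base
# model can serve many tenant- or use-case-specific adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.1-8B"              # example base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension: adapter stays tiny
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attach adapters to attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of the base weights
# ...fine-tune `model` on the use-case dataset, then save just the adapter:
# model.save_pretrained("adapters/journal-insights-tenant-a")
```

Because only the small adapter weights are saved, many use-case- or tenant-specific adapters can share one frozen base model, which is what makes tenant-specific results practical without retraining the full LLM.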

In addition to these use case-specific approaches to protecting data, Workday deploys key AI security practices across Illuminate:

  • Data is never shared to train third-party public models.
  • Data is always encrypted in transit and at rest.
  • Regional data compliance (data residency in training and inference) is adhered to in each region.
  • AI features can be configured to exclude specific locations or user groups.

Approaches to safely training models.

Company-Specific Models

  • Fine-tuned per company
  • One model per tenant
  • Example: journal insights

Fine-Tuning on Shared LLM

  • Shared model across tenants
  • Non-sensitive data only
  • Example: job description generation

Shared LLM Without Full Retraining

  • Tenant-specific vector databases
  • No co-mingling of data
  • Example: questions and answers

Workday selects the best approach to deliver high-quality results—while ensuring customer data is kept private and secure.

Responsible AI: The key to sustainable AI innovation in business.

In late 2022, AI exploded onto the consumer scene with the launch of OpenAI’s ChatGPT, quickly followed by a wave of applications capable of creating complex text, code, realistic photos, and even animated visuals from simple conversational language prompts. However, alongside these rapid advancements came concerns about bias, hallucinations (inaccurate and nonsensical outputs), and the ethical implications of increasingly powerful AI systems. The AI revolution has already brought profound changes, and its impact will only continue to grow.

For organizations adopting AI, it’s crucial to carefully consider your approach to AI, ensuring that you’re positioned for long-term success while minimizing risk.

Responsible AI: The business benefits of innovating with integrity.

We believe responsible AI is more than just the right thing to do: it’s also the smart thing to do. Using AI responsibly makes business sense, and a robust RAI program delivers numerous benefits, including:

  • Increased adoption: When employees and customers trust that AI systems are fair, unbiased, and reliable, they are more likely to embrace and use them.
  • Improved decision-making: RAI programs emphasize data quality, fairness, and transparency, leading to more accurate and reliable AI models that support better decision-making.
  • Reduced risks: By proactively addressing potential ethical and societal concerns, RAI programs help mitigate risks such as reputational damage, regulatory fines, and legal challenges.
  • Enhanced agility: A strong RAI framework provides a clear and consistent approach to AI development, enabling organizations to adapt quickly to new challenges and opportunities.

At Workday, we believe innovation doesn’t have to come at the cost of integrity. Instead, a well-designed RAI program enables businesses to understand and mitigate risk early on so they can confidently and consistently advance toward their AI goals.

The Workday responsible AI program: Building trust to create real value.

After years of creating, testing, and improving our internal approach to RAI, Workday has established a comprehensive and actionable program based on four pillars—principles, practices, people, and policy. We’ll review each in detail below.

Pillar 1: The principles and values that drive everything we do.

Workday is committed to doing right by our customers, their employees, and the broader community. Since our founding, we’ve upheld core values such as respect and appreciation for employees and customers. We believe in innovating with integrity, which led to our initial commitment to an ethical AI approach in 2019.

After establishing a dedicated RAI team in 2022, we updated our ethical AI principles to ensure clarity and alignment across the organization. In developing responsible and trustworthy AI systems, we strive to uphold these four central tenets:

Amplify human potential.

We believe AI should empower people, not replace them. Our focus is on helping customers and their employees harness the power of AI to adapt and thrive. We are developing AI solutions that reduce repetitive tasks, freeing up time for more strategic and meaningful work.

Positively impact society.

We develop AI to solve real business problems and empower people, not just for technology’s sake. As a leading provider of enterprise cloud applications for HR and finance, we avoid developing AI for intrusive productivity monitoring. Instead, we focus on safe, human-centered AI solutions aligned with our values.

Champion transparency and fairness.

We leverage a risk-based approach to measure and mitigate bias in our AI features. We also provide clear and complete documentation to our customers on how our AI solutions are built; how they work; and how they are trained, tested, and monitored.

Deliver on our commitment to data privacy and protection.

Protecting customer and user data has always been a priority, and this commitment extends to our AI technologies. Data privacy and security are built into the development and the architecture of every AI capability. Our model training approach reflects the sensitivity of the data and use case involved, as determined by a robust risk management framework.

Pillar 2: The practices of responsible AI in Workday.

Our guiding principles are central to our RAI efforts, and we bring them to life through the following practices.

Human-centered:

To amplify human potential, we have practices to ensure that our AI systems always prioritize people. Our development teams follow a “human-in-the-loop” design philosophy. This means AI outputs inform—but don’t automate—important business decisions. Every AI feature is designed for inclusivity, ensuring usability by diverse users regardless of technical expertise, accessibility needs, or language. Our customers, not Workday, are in control of their data contributions. We clearly communicate how we use customer data to train AI models and obtain explicit consent. Customers have full visibility and control, enabling them to turn AI features on or off at their discretion. They also have granular control to disable AI features by location or job function.
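
As a purely hypothetical sketch (not Workday configuration), granular control of this kind can be pictured as a per-feature policy with exclusions by location and job function layered on a tenant-wide switch:

```python
# Hypothetical sketch, not Workday configuration: the kind of policy check
# implied by tenant-level control over AI features, with exclusions by
# location or job function layered on top of a global on/off switch.
AI_FEATURE_POLICY = {
    "job_description_generation": {
        "enabled": True,
        "excluded_locations": {"DE", "FR"},        # example exclusions
        "excluded_job_functions": set(),
    },
    "journal_insights": {
        "enabled": False,                          # feature toggled off tenant-wide
        "excluded_locations": set(),
        "excluded_job_functions": {"Intern"},
    },
}

def feature_allowed(feature: str, location: str, job_function: str) -> bool:
    policy = AI_FEATURE_POLICY.get(feature)
    if policy is None or not policy["enabled"]:
        return False
    if location in policy["excluded_locations"]:
        return False
    if job_function in policy["excluded_job_functions"]:
        return False
    return True

print(feature_allowed("job_description_generation", "US", "Recruiter"))  # True
print(feature_allowed("job_description_generation", "DE", "Recruiter"))  # False
```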

Transparent and explainable:

To champion transparency in our product, we provide clear notifications in the user interface when AI is being leveraged. We also provide explainability within the interface, clarifying how the AI feature outputs are derived and how they should be understood in the context of the use case.

Our Illuminate fact sheets provide customers with detailed information on specific AI features, including:

  • Data inputs and outputs
  • Intended use guidance
  • Training and testing methodologies
  • Limitations

We understand that customers may want to test AI outputs locally for sensitive use cases. To support this, we offer options such as easy data export for additional testing.

We also provide customers with summarized results from holistic quality testing and reporting. For sensitive use cases, we document that the quality of data is appropriate for the intended use. Test results are documented before deployment and at scheduled intervals.

Safe and secure:

Beyond the data privacy and security built into the Workday platform, we take additional steps to ensure our AI capabilities meet the highest standards.

  • Training public models: Customer data is never shared to train third-party public models such as ChatGPT or Google Gemini. This ensures sensitive data isn’t leaked externally or to unauthorized internal personnel.
  • Customer data control: Customers have complete control over their data and how it’s used for model training. Data used for training is refreshed regularly, including when a customer opts out of contributing data for specific features.
  • Data encryption: All data processed through Illuminate features is encrypted both in transit and at rest.

Pillar 3: The people and importance of multidisciplinary collaboration.

Responsible AI is everyone’s responsibility at Workday. Our RAI program fosters interdisciplinary collaboration, requiring input and oversight from various stakeholders. We rely on a diverse group of cross-functional experts to develop and maintain our RAI governance framework. This includes participation from:

  • Engineering and product teams
  • Data science and quality control
  • Legal and compliance
  • Public policy
  • UX design
  • Belonging and diversity

The program has strong executive buy-in, with key leaders recognizing RAI as a companywide imperative.

At Workday, we’ve developed the following participation levels:

  • RAI Advisory Board: Composed of C-suite executives from across the organization, this board guides and supports the dedicated RAI team.
  • RAI team: Led by our Chief RAI Officer, Dr. Kelly Trindel, this team of scientists and AI experts dedicates 100% of its time to building and maturing RAI governance.
  • RAI Champions Network: This network of experts embedded within key product and technology teams across the company is passionate about developing responsible, ethical AI solutions.

Pillar 4: The policy of proactive engagement.

Workday actively contributes to responsible AI policy. We engage with stakeholders throughout the policy development process to help our customers adapt to new regulations and shape policies that promote trust, innovation, and ethical AI practices.

For example, when the EU AI Act came into force in August 2024, Workday customers were already prepared. Although the Act’s requirements are being phased in gradually, we had proactively aligned our RAI practices with the Act by working closely with officials during its development. This proactive approach minimized disruption and allowed our customers to stay focused on their core business activities.

Mitigating AI risk: A Workday framework.

A risk-based approach is the foundation of our RAI program. We use a risk-assessment tool to categorize AI features into five tiers: four levels of acceptable risk and one unacceptable-risk tier. Each tier has specific requirements for documentation, guidelines, and mitigation strategies.

The risk evaluation tool identifies and documents risk levels for all new AI products and features, paving the way for appropriate mitigation early in the development process. The analysis is quick and efficient, and it considers factors such as:

  • The context of the AI’s use
  • The AI’s technical design
  • Potential impact on individuals
  • Surveillance and privacy concerns

Figure 3-4. RAI risk evaluation factors.

Without this risk-based approach, we might hinder innovation by placing unnecessary burdens on low-risk AI or overlook potential issues in high-risk AI. This framework allows us to allocate resources effectively and promote safe AI innovation.
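
To make the idea concrete, here is an invented, illustrative sketch (not the Workday tool) that maps scores for the factors above to one of five tiers, four acceptable and one unacceptable:

```python
# Illustrative only -- the tier names and scoring below are invented to show
# how a risk-based intake could map the listed factors (context of use,
# technical design, impact on individuals, surveillance/privacy) to one of
# five tiers, four acceptable and one unacceptable.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4
    UNACCEPTABLE = 5       # the feature is not built

def assess(context_of_use: int, technical_design: int,
           individual_impact: int, surveillance_privacy: int) -> RiskTier:
    """Each factor is scored 0 (no concern) to 3 (severe concern)."""
    if surveillance_privacy == 3:          # e.g., intrusive monitoring
        return RiskTier.UNACCEPTABLE
    total = context_of_use + technical_design + individual_impact + surveillance_privacy
    if total <= 2:
        return RiskTier.MINIMAL
    if total <= 5:
        return RiskTier.LOW
    if total <= 8:
        return RiskTier.MODERATE
    return RiskTier.HIGH

print(assess(context_of_use=1, technical_design=1,
             individual_impact=2, surveillance_privacy=0))   # RiskTier.LOW
```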

Conclusion.

Through a thoughtful, proactive, and responsible approach to AI, Workday makes AI available for adoption and use throughout your key business processes and daily workstreams. Our unique Workday Illuminate architecture, Workday Agent System of Record, and our responsible AI efforts work together to keep your data secure and lower risk while unlocking even greater benefits to maximize your ROI. Workday has a differentiated AI foundation that is prepared to take advantage of the latest innovations and deliver real business value to our customers quickly—no matter where the technology is headed.

The next chapter explores our ongoing commitment to an exceptional user experience.


© 2025 Workday, Inc. All rights reserved. WORKDAY and the Workday logos are trademarks of Workday, Inc. registered in the United States and elsewhere. All other brand and product names are trademarks of their respective holders.
