Our Commitment to Responsible AI

WorkBoard and many other organizations have long used AI, machine learning, and natural language processing to improve stakeholder experiences, but the advent of generative AI presents tremendous new opportunities and raises new concerns. Although generative AI is still in the early stages of adoption, it is already making a real difference in our ability to gather, synthesize, and author information more quickly – activities at the center of knowledge work.

To drive the most beneficial outcomes for our customers, our use of AI must be trustworthy, ethical, and, as regulations emerge, compliant.

The pillars of our commitment to responsible AI are:

We accelerate teams, not replace them
WorkBoard’s use of AI generates drafts of OKRs, action plans, scorecards, and other strategy execution artifacts that enable users to make their own decisions faster. The Co-Author and AI are collaborators with the user – visibly and transparently – rather than replacements for the user’s ideas, input, or insight. Users can choose to accept, modify, or reject the drafts or briefs that our Co-Author generates. This “human in the loop” use of AI honors the agency of our users while providing benefits that neither AI nor a human could achieve alone.

We are transparent
We make clear documentation on our architecture and software available to our customers, including how we use AI. This includes explaining how our learning models are built, trained, and tested. We value inclusion and fairness, and our governance process monitors our use of AI for unintended consequences. As we continue to innovate and learn, we will maintain our deep commitment to explainability and the controls that support it.

Your data is private by design
Data privacy and confidentiality are the foundations of trust for any platform, and our privacy and information security policies apply to our use of AI technologies. We give customers control over how their data is used in our AI solutions. Privacy by design is a first principle of all of our development practices, and our use of AI and learning models is no exception. WorkBoard enables companies to harness the intelligence of their own strategy execution data managed in our platform while benefiting from the power of a domain-specific large language model to generate strong suggestions faster – without worrying that their data ever lands in the public domain, on ChatGPT, or in a competitor’s hands.

Generative AI Data Security FAQ

How does the OKR functionality in WorkBoard utilize generative AI?

WorkBoard uses the Azure OpenAI Service to provide intelligent suggestions and prompts for generating OKRs based on the data available in the platform.

What steps are taken to ensure the security and privacy of our OKR data when using this functionality?

At WorkBoard, we take the security and privacy of your data seriously. We follow industry best practices and employ robust security measures to protect your OKR data. All communication between WorkBoard and Azure OpenAI is encrypted, and access to your data is strictly controlled and limited to authorized personnel.

How is the OKR data transmitted to Azure OpenAI, and what measures are in place to secure this data during transmission?

The OKR data is transmitted to Azure OpenAI over encrypted channels using industry-standard security protocols such as TLS. We adhere to secure data transmission practices to ensure the confidentiality and integrity of your OKR data.
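As a minimal illustration of what an encrypted channel means in practice – this is a sketch, not WorkBoard's actual transport code – a standard TLS context in Python enforces certificate validation and hostname checking by default, so a client built on it will refuse to send data over an unverified connection:

```python
import ssl

# Illustrative sketch only -- not WorkBoard's actual client code.
# A default TLS context validates the server certificate against trusted
# CAs and checks that the hostname matches, so data is only ever sent
# over a verified encrypted channel.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Certificate validation and hostname checking are on by default:
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificate required
print(ctx.check_hostname)                    # True: hostname must match cert
```

Any HTTPS client built on such a context fails fast on an invalid or untrusted certificate rather than falling back to an unverified connection.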

Are there any measures in place to prevent unauthorized access or misuse of our OKR data within Azure OpenAI?

Yes, Azure OpenAI maintains a robust security infrastructure to prevent unauthorized access or misuse of customer data. The service implements strict access controls, monitoring systems, and auditing mechanisms to ensure the protection of your OKR data.

What level of control do we have over our OKR data?

You retain full ownership and control over your OKR data. Azure OpenAI does not use or retain your data beyond the scope of the OKR generation process. WorkBoard does not share your data with any third parties without your explicit consent.

Are there any data anonymization techniques used when transmitting our OKR data to Azure OpenAI?

Your data is encrypted in transit. Personally identifiable information (PII) and other sensitive information is anonymized or tokenized before being sent to Azure OpenAI, ensuring the privacy and confidentiality of your data.
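To illustrate the idea of tokenization in general terms – this is a hypothetical sketch, not WorkBoard's actual implementation – PII such as email addresses can be replaced with opaque placeholder tokens before a prompt leaves the platform, with the mapping kept local so results can be re-identified afterward:

```python
import re

# Hypothetical illustration of PII tokenization -- not WorkBoard's actual
# implementation. Email addresses are swapped for opaque tokens before the
# text is sent to an external model; the mapping never leaves the platform.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def tokenize_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with a stable placeholder token."""
    mapping: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        email = match.group(0)
        if email not in mapping:
            mapping[email] = f"<PII_{len(mapping)}>"
        return mapping[email]

    return EMAIL_RE.sub(_replace, text), mapping

def detokenize(text: str, mapping: dict[str, str]) -> str:
    """Restore original values after the model response comes back."""
    for email, token in mapping.items():
        text = text.replace(token, email)
    return text

masked, mapping = tokenize_pii("Ask jane.doe@example.com to review Q3 OKRs.")
print(masked)  # Ask <PII_0> to review Q3 OKRs.
```

The external model only ever sees the placeholder, while the round trip through `detokenize` restores the original value locally.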

Can we trust the suggestions and prompts generated to maintain the confidentiality of our OKR data?

Yes, the WorkBoard Co-Author is designed to provide high-quality suggestions while maintaining the confidentiality of your OKR data. The model focuses on generating relevant and useful prompts without retaining or disclosing any sensitive information.

Does Azure OpenAI have any data retention policies regarding our OKR data?

Azure OpenAI does not retain your OKR data beyond the scope of the OKR generation process. Once the suggestions are generated, the data is discarded, ensuring that your data remains secure and private.

Are there any additional security certifications or compliance standards that Azure OpenAI adheres to?

Azure OpenAI maintains various security certifications and complies with standards including ISO 27001, SOC 2 Type II, HIPAA, and GDPR. These certifications and standards help ensure that its infrastructure and practices meet rigorous security and privacy requirements.

For more information on Azure OpenAI, see the Azure OpenAI Service FAQ.
