Responsible use of AI

At TeamRetro, we believe that artificial intelligence should enhance teamwork, decision-making, and collaboration while keeping people, ethics, and trust at the heart of every interaction. Our commitment to responsible AI ensures that the tools we build are transparent, fair, and aligned with these values.

TeamRetro does not use your data to build or train AI models.

All content generated using TeamRetro's AI tools is subject to our platform's overall security, privacy, and terms of use.

TeamRetro AI capabilities

TeamRetro integrates generative AI into various features to enhance retrospectives and health checks. These AI features streamline your experience by generating templates, health models, and icebreakers; suggesting groupings and actions; providing summaries and insights; and tracking team sentiment and topics over time.

| Capability | Provider | Model |
| --- | --- | --- |
| Retrospective template, icebreaker question, and health model generation | OpenAI | GPT-4o |
| Meeting summarization | AWS Bedrock | Anthropic Claude Sonnet 4.x |
| Meeting suggested titles | AWS Bedrock | Anthropic Claude Haiku 4.x |
| Insights: topic, sentiment, and keyword annotations; theme identification | AWS Bedrock | Anthropic Claude Sonnet 4.x; Anthropic Claude Haiku 4.x |
| Automated grouping and group title suggestions | AWS Bedrock | Anthropic Claude Sonnet 4.x; Anthropic Claude Haiku 4.x |
| Action suggestions | AWS Bedrock | Anthropic Claude Sonnet 4.x |

Learn more about our AI-powered features. Learn more about our sub-processors.

Our principles

We are committed to responsible AI governance. Our AI use must:

  • Be fair, transparent, and accountable.
  • Avoid bias and discrimination.
  • Use data ethically and lawfully, with appropriate consent.
  • Be explainable to users, with clear documentation of decision-making processes.
  • Comply with SOC 2 Type 2, GDPR, and other applicable privacy and security requirements.

Our commitment to safety

Principles are important, but so are practices. To keep our AI aligned with your needs, we use AWS Bedrock Guardrails to actively monitor, moderate, and restrict AI use where appropriate. These safeguards include, but are not limited to:

  • Input prompts and user content are monitored by TeamRetro to reduce the risk of inappropriate or unsafe AI results.
  • Output from AI features is automatically moderated to prevent unwanted or potentially harmful responses.
  • AI features in TeamRetro are configured to avoid generating content related to:
    • Medical information or health advice.
    • Self-harm or mental health diagnoses.
    • Sexually explicit material.
    • Political topics, including commentary on elections or political figures.
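To illustrate how denied-topic guardrails of this kind are typically set up on AWS Bedrock, here is a minimal sketch of a guardrail configuration payload. This is a hypothetical example, not TeamRetro's actual configuration: the guardrail name, topic definitions, and blocked-response messages are all assumptions.

```python
# Hypothetical AWS Bedrock Guardrails configuration denying the topic
# categories listed above. All names and wording are illustrative
# assumptions, not TeamRetro's actual settings.

# Topics the guardrail should refuse to engage with.
denied_topics = [
    ("MedicalAdvice", "Medical information or health advice."),
    ("SelfHarm", "Self-harm or mental health diagnoses."),
    ("Political", "Political topics, including elections or political figures."),
]

payload = {
    "name": "example-retro-guardrail",  # assumed name
    "blockedInputMessaging": "Sorry, this request can't be processed.",
    "blockedOutputsMessaging": "Sorry, this response was withheld.",
    # Deny-listed topics, applied to both prompts and model output.
    "topicPolicyConfig": {
        "topicsConfig": [
            {"name": name, "definition": definition, "type": "DENY"}
            for name, definition in denied_topics
        ]
    },
    # Built-in content filters, e.g. for sexually explicit material.
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"}
        ]
    },
}

# With AWS credentials configured, this payload could be submitted via:
#   import boto3
#   bedrock = boto3.client("bedrock")
#   bedrock.create_guardrail(**payload)
```

Once created, a guardrail like this can be attached to model invocations so that both input prompts and generated output are evaluated against the same policy.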

Reporting and feedback

Responsible AI is not a one-time promise; it's an ongoing journey. We continuously review, refine, and update our practices as technology evolves, ensuring our solutions remain aligned with the highest ethical standards.

If you encounter inappropriate, inaccurate, or unexpected AI content, or believe a prompt or output was incorrectly flagged, please contact our support team at info@teamretro.com so we can investigate.

Frequently asked questions

When is my data processed by AI?

Only when you choose to use features like summarization or annotation. Even then, the data is processed briefly and never stored or used to train the AI.

Can AI features be disabled?

Yes, account administrators can enable/disable AI features for their entire accounts via settings. Facilitators can further disable AI features in individual meetings.

Will TeamRetro add more AI features in the future?

We might. But when we do, you'll always have transparency, control, and confidence that your data is treated with care.