Businesses everywhere are betting on a new wave of autonomous AI agents: technologies built to take on decisions and tasks once reserved for people. The promise is sweeping: agents will be able to run work in the background and carry out tasks directly, transforming how organizations operate. According to The Work Innovation Lab, 76% of workers expect to hand off nearly half their tasks to these agents within three years.
But the reality looks very different. Even the most advanced agents from the biggest tech players fail at 70% of real-world office tasks.[1] The problem is the way agents work. Built to operate on their own, they miss the mark when work requires judgment, context, or collaboration. For agents to actually deliver, they need to stop acting like isolated bots and start working like true teammates.
That’s why we built AI Teammates: collaborative AI agents designed for teams. These agents can take on complex work, collaborate with teams, and deliver outcomes—just like a real teammate would.
The process is simple:
Name your AI Teammate and define the role it will play on your team.
Grant it access to the projects, portfolios, goals, and data it needs to do the job.
Assign work, give feedback, and refine results together.
Here are three roles we’ve already set up with AI Teammates.
Before MUSE, the process for creating Help Center articles was manual and repetitive. Writers spent the bulk of their time digging through competitor content, rewriting technical notes into plain language, and checking drafts for brand consistency. That left less time for higher-value work like sharpening key messages, shaping narratives, and setting content strategy.
With MUSE, Asana’s AI Teammate that drafts Help Center articles, teams start with a draft that’s accurate, structured, and aligned to Asana’s voice.
MUSE’s core responsibilities include:
Generating initial drafts tailored to the right audience persona
Ensuring consistency with Asana’s brand voice and style guide
Optimizing articles for searchability and engagement
Formatting pieces with clear sections, headers, and visual elements
By handling the research, formatting, and first-pass writing, MUSE allows the content team to focus on higher-value work, like setting the brief, refining the message, and raising the bar for quality.
“MUSE helps us decrease manual effort when it comes to writing and publishing Help Center articles,” said content writer Vania. “It lets us move faster.”
Here’s how MUSE creates a Help Center article:
To kick off work, an Asana content writer creates an Asana task, assigns it to MUSE, and includes clear direction and the context it needs to get started, such as:
A clear article topic
Specific sections to include in the article
Direction for formatting the piece
Links to related documentation, such as release notes, internal product guides, or reference articles
This upfront detail ensures MUSE has the full context before it begins drafting.
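For teams that script their intake, the same kickoff can be set up programmatically. Here’s a minimal sketch using Asana’s REST API in Python; the access token, project GID, and assignee GID are placeholders, and the task content is purely illustrative:

```python
import requests

# Illustrative sketch: create a kickoff task via Asana's REST API.
# The token and GIDs below are placeholders, not real values.
ASANA_PAT = "your-personal-access-token"
PROJECT_GID = "1200000000000001"   # hypothetical Help Center content project
MUSE_GID = "1200000000000002"      # hypothetical GID for the MUSE teammate

task = {
    "data": {
        "name": "Draft Help Center article: Scheduling recurring tasks",
        "notes": (
            "Topic: how to schedule recurring tasks.\n"
            "Sections: overview, setup steps, FAQs.\n"
            "Formatting: follow the Help Center template.\n"
            "References: <links to release notes and product guides>"
        ),
        "projects": [PROJECT_GID],
        "assignee": MUSE_GID,
    }
}

resp = requests.post(
    "https://app.asana.com/api/1.0/tasks",
    json=task,
    headers={"Authorization": f"Bearer {ASANA_PAT}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["gid"])  # GID of the newly created task
```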
Once the task is assigned, MUSE reviews the task description and automatically drafts subtasks that map the work from start to finish. The goal is to turn the broad request into a clear, step-by-step workflow that takes the task from raw documentation to a finished Help Center article, just as a human teammate would plan it.
These subtasks might include:
Review competitor resources for gaps and opportunities
Review formatting and style guidelines
Draft initial article content
Identify which product visuals to include
Produce and link the final draft in the parent task
By laying out the plan upfront, MUSE makes it easy for everyone to see what’s happening and jump in with feedback at any stage.
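This breakdown maps onto a standard Asana pattern: each plan item becomes a subtask of the kickoff task. A rough illustration, reusing the subtask names above (the token and parent task GID are placeholders):

```python
import requests

ASANA_PAT = "your-personal-access-token"  # placeholder
PARENT_TASK_GID = "1200000000000003"      # hypothetical kickoff task GID

subtasks = [
    "Review competitor resources for gaps and opportunities",
    "Review formatting and style guidelines",
    "Draft initial article content",
    "Identify which product visuals to include",
    "Produce and link the final draft in the parent task",
]

for name in subtasks:
    # POST /tasks/{task_gid}/subtasks creates a subtask under the parent.
    resp = requests.post(
        f"https://app.asana.com/api/1.0/tasks/{PARENT_TASK_GID}/subtasks",
        json={"data": {"name": name}},
        headers={"Authorization": f"Bearer {ASANA_PAT}"},
        timeout=30,
    )
    resp.raise_for_status()
```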
With the workflow mapped out, MUSE moves on to research, gathering the details needed to draft a complete article. That includes:
Reviewing internal documentation
Analyzing published Help Center articles to mirror layout and tone
Learning how the feature works and how users apply it in practice
Reviewing competitor content to surface gaps and map differentiators
Determining the target persona so the article is written at the right level
Using the context it has gathered, MUSE posts its proposed plan as a comment in the parent task and tags the original requestor. The plan typically includes:
The article outline and key sections
A project timeline
An invitation for feedback before drafting begins
Once the requestor signs off on the plan, MUSE begins the initial draft, incorporating Asana’s style guidelines and best practices, including:
Adapting technical content into customer-friendly explanations
Translating features into benefit-focused language
Formatting with consistent headers, sections, and navigation
Adding SEO optimization such as keywords and meta descriptions
The result is a draft that’s polished enough for review.
When the draft is ready, MUSE links it in the parent task and tags reviewers for feedback. Reviewers leave comments directly in the document or in Asana, and MUSE processes the feedback, updates the article, and cycles through revisions until all reviewers sign off.
After revisions, MUSE finalizes the article by closing out its subtasks, producing the final document, and linking it in the parent task. From there, the content team can edit and refine the content further if needed.
MUSE wasn’t built as a general-purpose writer; it was set up specifically as a “Help Center writer.” Scoping your AI Teammate to a key role and defining its job gives it clear responsibilities and measurable outcomes. The results:
Faster first drafts: Writers start from a solid, on-brand draft instead of a blank page, shortening turnaround time.
More time for high-value work: The team can focus on creative direction, key messages, and narrative quality, while MUSE handles research, formatting, and initial drafting.
Smoother content production over time: MUSE spots recurring feedback and carries those learnings into the next draft.
Before SCOUT, auditing Asana.com for SEO opportunities and schema gaps was manual and resource-heavy. SEO managers had to crawl sitemaps, trace navigation paths, review page types, and determine which structured data to implement. Creating SEO-optimized page content was equally time-intensive. For the SEO team, that meant less time for strategy and more time spent on repetitive tasks.
With SCOUT, an AI Teammate that supports SEO content creation and schema markup, teams can quickly surface opportunities, identify missing structured data, and generate SEO-optimized content.
SCOUT’s core responsibilities include:
Auditing pages for structured data gaps and recommending schema types
Generating valid JSON-LD schema templates
Helping improve visibility and discoverability across page types
Creating SEO-optimized, LLM-friendly content
“SCOUT will help us when we’re low on writing or research resources, and it will take on that work in an LLM-optimized way,” said senior SEO manager Calvin. “That’s where SCOUT is the strongest: getting the research in place and helping us execute.”
Here’s how SCOUT runs a schema audit:
To kick off work, an SEO manager creates a task in Asana, assigns it to SCOUT, and includes a clear directive. For example, the request might be for SCOUT to grab all the unique page types on Asana.com and catalog them by first-level subfolder.
To make sure SCOUT has everything it needs to understand the task, the manager might add additional context, like a link to the sitemap, navigation documentation, and past SEO audits or research.
Once SCOUT is assigned the task, it reviews the request and breaks it into subtasks that map the workflow from start to finish. For a schema audit, the subtasks might include:
Research Asana.com site structure and navigation
Extract and catalog first-level subfolders
Compile a comprehensive page type list
Verify and validate results
Deliver final results and methodology in the parent task
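For intuition, here’s what the extract-and-catalog step can look like in code. This is a sketch, not SCOUT’s actual implementation; it assumes a single standard sitemap.xml at an assumed URL (large sites often use a sitemap index, which would need one extra round of fetching):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict
from urllib.parse import urlparse

import requests

SITEMAP_URL = "https://asana.com/sitemap.xml"  # assumed location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_URL, timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Group each URL's path by its first-level subfolder.
catalog = defaultdict(list)
for loc in root.iterfind(".//sm:loc", NS):
    path = urlparse(loc.text.strip()).path
    first = path.strip("/").split("/")[0] or "(root)"
    catalog[first].append(path)

for folder, pages in sorted(catalog.items()):
    print(f"{folder}: {len(pages)} pages")
```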
SCOUT then posts a comment in the parent task, outlining how it will identify all first-level subfolders and tagging the manager for visibility into the approach.
Once the SEO manager approves the proposed plan, SCOUT moves forward, researching, extracting, cataloging, and validating results. At each stage, the AI Teammate posts a comment back to the subtask so the manager can track progress.
Once all subtasks are complete, SCOUT posts a final comment in the parent task summarizing the audit. For the schema audit, this might include:
The full catalog of first-level subfolders
The methodology used to gather results
Key categories discovered across the site
From here, the SEO team can refine schema strategy, implement structured data, or use the catalog as a foundation for SEO-optimized content creation.
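To make “implement structured data” concrete, here’s the kind of artifact a schema recommendation turns into. A minimal FAQPage example, serialized from Python; the schema type and copy are illustrative assumptions, not SCOUT’s actual output:

```python
import json

# Illustrative only: a FAQPage JSON-LD block of the kind a schema audit
# might recommend for a help or FAQ page. All copy here is hypothetical.
schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is a project in Asana?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A project is a shared space where teams organize tasks toward a goal.",
            },
        }
    ],
}

# The serialized JSON ships inside a <script type="application/ld+json"> tag.
print(json.dumps(schema, indent=2))
```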
SCOUT works best when its tasks include sitemaps, past audits, and navigation documentation. The more background you give your AI Teammate, the stronger and more usable its outputs will be. That includes your company’s best practices, clear specs around responsibilities, reference documents, and access to the projects and goals it supports.
The results:
Faster audits: SCOUT handles the heavy lifting of cataloging site structures and page types, saving hours of manual work.
More time for strategy: SEO managers can focus on content priorities and strategy instead of repetitive data gathering.
Built-in execution support: With schema-ready outputs and SEO content capabilities, SCOUT doesn’t just identify opportunities, it helps turn them into action.
Faster SEO-optimized content creation: SCOUT drafts copy aligned to keywords and structure, giving writers a head start on optimized content.
Before DASH, sprint coordination was full of manual updates and repetitive reporting. Teammates had to take time out of their day to share status, collect progress, and prepare retrospectives. Managers often spent hours pulling information together just to get a clear view of how the sprint was going.
With DASH, an AI Teammate that acts as a sprint coordinator, those updates are automated. DASH creates daily summaries, surfaces mid-sprint reporting, and compiles retrospective insights. Instead of spending meetings gathering data, teams can use that time to discuss results and next steps. For digital platform manager Bill, introducing DASH into the engineering workflow has helped “alleviate the manual work that has to be done around a regular sprint planning cycle.”
DASH’s core responsibilities include:
Producing daily summaries of team activity within a sprint
Compiling mid-sprint status updates and trending insights
Identifying issues, blockers, and successes across projects
Generating retrospective summaries with actionable recommendations
Here’s how DASH runs a sprint retrospective:
An engineering teammate assigns DASH a task in Asana with a clear request, such as creating a retrospective for a recent project. The teammate tags DASH with a prompt that explains what to include—like what went well and what challenges the team faced—and links the relevant Asana project so DASH knows exactly where to pull information from.
After reviewing the task, DASH responds in the parent task with its go-forward plan. For a retrospective, this might include:
Conducting a comprehensive analysis of project tasks, comments, and interactions
Identifying recurring issues, challenges, and successful practices
Compiling findings into a structured summary with actionable insights
DASH also creates subtasks to manage its own work and tags the original requestor, confirming the plan and letting them know it will share updates once the retrospective summary is ready for review.
With the plan in place, DASH begins the retrospective much like a human teammate would: gathering context from the linked project and pulling insights based on the provided instructions. This might include:
Reviewing project tasks, comments, and team interactions to identify blockers or delays
Identifying challenges and successful practices across the project lifecycle
Flagging conflicts or recurring issues that surfaced during the sprint
Pulling specific examples that illustrate key successes and areas of improvement
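Under the hood, gathering that context boils down to walking the project’s tasks and their comment streams. A toy sketch against Asana’s REST API (the GIDs are placeholders and the keyword heuristic is purely illustrative; DASH’s actual analysis goes well beyond this):

```python
import requests

ASANA_PAT = "your-personal-access-token"  # placeholder
PROJECT_GID = "1200000000000004"          # hypothetical sprint project GID
HEADERS = {"Authorization": f"Bearer {ASANA_PAT}"}
BASE = "https://app.asana.com/api/1.0"

BLOCKER_WORDS = ("blocked", "waiting on", "delayed")  # toy heuristic

# GET /projects/{gid}/tasks lists the tasks in the sprint project.
tasks = requests.get(f"{BASE}/projects/{PROJECT_GID}/tasks",
                     headers=HEADERS, timeout=30)
tasks.raise_for_status()

flagged = []
for task in tasks.json()["data"]:
    # GET /tasks/{gid}/stories returns the comments and activity on a task.
    stories = requests.get(f"{BASE}/tasks/{task['gid']}/stories",
                           headers=HEADERS, timeout=30)
    stories.raise_for_status()
    for story in stories.json()["data"]:
        text = story.get("text", "").lower()
        if any(word in text for word in BLOCKER_WORDS):
            flagged.append((task["name"], text[:80]))

for name, snippet in flagged:
    print(f"Possible blocker in '{name}': {snippet}")
```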
Once its analysis is complete, DASH posts a summary in the parent task, organizing the retrospective into clear sections for review, including:
An executive summary
Key issues and challenges
Successful practices and team wins
Actionable recommendations for future projects
An overall project assessment
Based on DASH’s summary, the team can use the retrospective in their meetings or follow up for further refinements. For example, if they want more specific guidance, they can ask DASH to break its recommendations into concrete steps the team can implement to improve workflows. “It’s like a coworker—the more information and instructions you give it, the better the outputs will be,” said Bill.
Each time teammates asked questions and refined DASH’s retrospective summaries, its outputs got sharper. The same principle applies to any AI Teammate: the more you refine, the better it gets.
The results:
Less manual reporting: Instead of every teammate spending time on daily updates, DASH compiles daily and mid-sprint summaries automatically.
Deeper insights: By analyzing tasks and comments, DASH surfaces blockers and trends that might otherwise go unnoticed.
More effective retrospectives: Teams walk into retros with structured insights already in hand, so meeting time can focus on discussion and decisions.
"It does function very much like a remote employee,” said Bill. “And it can help take on a lot of the manual work team members do.”
These use cases are only the beginning. Across functions like IT, product and engineering, and operations and PMO, we’re finding clear areas where AI Teammates can help accelerate work. Whether acting as a campaign strategist that drafts briefs and tracks deliverables, an IT ticketing specialist that categorizes and routes requests, or a launch navigator that flags risks across cross-functional efforts, the throughline is clear: AI Teammates aren’t here to take roles away from humans; they’re here to automate the manual work and free people up for strategic work.
When paired with AI Studio, AI Teammates provide a complete AI solution for work. While AI Studio handles routine, high-volume work, AI Teammates focus on the complex, collaborative work that requires context and judgment. Together they free up teams to focus on what matters most: strategic initiatives and creative problem-solving that drive outcomes.
AI agents may be the next big shift in how we work, but high-impact work has always taken a team. With AI Teammates, that team just got stronger.
Learn how AI Teammates deliver outcomes with the right context, checkpoints, and controls.
[1] TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks, Carnegie Mellon University, 10 Sep 2025.