
Getting your org to “Yes!” with AI adoption

Michelle Gienow / 5 minutes / June 19, 2024

People often assume that, because it’s so radically new and different, adopting an AI tool requires a completely different evaluation process from all the other tools in your software development life cycle. 

It doesn’t. 

Evaluating an AI tool follows the same well-known process as any other tool or technology you might adopt into your organization. One thing is different, though: an AI tool requires involving a more diverse set of stakeholders, and involving them more deeply, than any previous technical initiative.

If you’re the one tasked with bringing an AI code assistant into your organization and your software development process, you probably just want to get started trying out the tech and seeing what your developers can do with it. Involving anyone else at this point feels like it would just slow down the technical selection process — and, were this a traditional technical evaluation, it would.

With AI, though, there are teams and individuals outside the usual technical choosers and users who also have a large stake in adopting such a new and complex technology. If you don't identify and involve these other stakeholders early on, they might emerge late in the adoption process and challenge or block the work you've already done. With a little advance planning, you can avoid these hiccups.


Who are the stakeholders involved in buying AI tools?

A stakeholder is defined as “any group or individual within an organization who (1) is affected by or (2) able to affect the achievement of an organization’s objectives.” 

In a technical initiative, we're used to thinking only in terms of choosers (those who decide a tool's fit with things like architecture and approach) and users (those who will live with the tool every day). These stakeholders are usually the primary people with a hands-on role in evaluating, testing, and deploying the software development tools in question. In an AI evaluation, however, they're joined by equally active and influential stakeholders who will also expect to play a large part in opening the door to AI in your organization. But how do you figure out who these stakeholders are?

The first step is to identify everyone involved in making this decision or having a voice in the process. For many companies, these will include: 

InfoSec, legal, and compliance 

Leaders and members of these teams all expect, quite rightly, to weigh in on whether a tool meets their standards, how data will be shared and what will be generated, and what level of risk is acceptable. Because AI is so new, your company will need to construct guardrails around how AI-powered tools are integrated into your work. Legal, InfoSec, and compliance teams are your collaborators in creating standards for AI that embrace innovation without putting the business at risk.

Executives, IT decision-makers, and the C-suite

A possibly surprising stakeholder cohort comes from the upper echelon of your company's leadership. They'll want a say in this project because adopting AI is currently a strategic business initiative. This isn't just your VP of Engineering choosing a better Git repository or a new security scanner; this is a significant undertaking, similar to a cloud native transformation, and its implications will touch almost every part of the organization. That's why it will happen only if all your stakeholders agree.

What matters to these different stakeholders?

Next, identify the specific issues that these various stakeholders care about. What information do they need to understand to make a decision and address any concerns? This matters even if you have full authority to investigate AI for adoption, as there are typically broader implications when adopting generative AI. 

What do these additional decision-makers care about? Do your homework before approaching them.

  • Legal cares about compliance with existing governance and policies, staying aligned with any applicable legal or regulatory requirements, any potential for AI-related intellectual property issues, and protecting your organization’s IP.
  • IT leadership cares about tool integration and compatibility. They'll also care about system maintenance, depending on your implementation: if it's bare metal, that means GPUs and networking; if it's in a VPC, that means choosing the right instance types from your cloud provider and managing the associated costs.
  • InfoSec cares about data privacy and security, including how the AI assistant handles sensitive data, access control, and authentication. They’ll specifically care about protecting your IP, and may require private deployments or even fully air-gapped environments (depending upon the use case).
  • Executive leadership cares about the company’s overall operations and bottom line. They want to see that AI tools will produce a favorable improvement to developer productivity, reduce technical debt, and accelerate time to market. They also care about flexibility and avoiding vendor lock-in when possible, whether this AI tool can scale and adapt to the organization’s changing needs, and how the choice can support future projects and evolving business requirements.

How to get started engaging stakeholders

Go to each of these stakeholders as early in your evaluation as possible, explain the initiative, and understand their expectations for any evaluation and rollout. 

Doing this right won’t be a small effort. But, as many teams trying to convince their company to trust in adopting AI tools have found out, you’re going to have these conversations with every stakeholder at some point in the adoption process. Getting started early reduces the overall effort. Getting in front of the questions will help you avoid freezing an otherwise successful and exciting AI implementation while the Legal, InfoSec, and IT teams each have to spend time getting up to speed on these efforts.

Getting started with stakeholders begins with the core technical project group: those tasked with researching, testing, and using an AI-assisted software development tool (or tools). Your very first job is to define the goal of the project: what positive outcomes do you want to achieve by deploying these tools, and how do you define success? Next, sit down with your org chart and identify any stakeholders outside the technical team who will also have a say. Third, identify the areas of concern each stakeholder or team is likely to have around formally bringing AI into the organization, so you can be proactive with helpful information when you reach out to them. (But never be afraid to answer a query with, "I don't know; let me find out and get back to you." No one is going to judge you for not having all the answers about a rapidly evolving new technology.)

Now it’s time to reach out and build stakeholder involvement team by team, listening to needs with an open mind. Chances are that many, if not all, of them are going to be excited but also a bit anxious about using AI in the business. Your ability to address many questions and concerns up front will build trust and confidence — which in turn will speed up the buy-in and approval process. 

Getting to “yes”

AI tools are almost certainly already in use within your company, often without leadership even knowing it: another form of shadow IT. Shadow AI means unsanctioned tools could be storing and training on your valuable IP, infecting every part of the org, from Marketing to Finance to Operations. This is your Legal, Compliance, and Security teams' biggest fear.

AI adoption in your company is going to happen. It's already happening. Getting it right, with minimal friction, is key to getting the most out of it.

Everyone who can influence or contribute to the outcome of the project, whether in the technical or the business branches of the org, needs to be identified, heard, and balanced (so you don’t find out the hard way that, say, Legal has major concerns and is freezing your adoption initiative indefinitely). By involving these stakeholders and addressing their specific needs, you’re well equipped to choose an AI code assistant that not only meets your technical requirements but also aligns with broader organizational concerns and business goals (and lets you get back to building software). 

If you’re interested in a more detailed guide to help choose the right AI software development tools for your company, we’ve got just the thing: download our new white paper, “AI Code Assistant Buyer’s Guide.” You’ll learn what to look for in an AI code assistant, what outcomes to expect, 7 evaluation criteria to consider, and much more — all backed by real-world examples and expert insights.