Using long-token contexts to quality check an entire API doc set

One of the advantages of recent gen AI updates is the massive token input context. When you can pass an entire set of documentation as input, much more powerful prompts become possible. In these prompts, the reference docs can serve as a key source of truth. User guide content can drift out of date, but a freshly generated reference doc should, for the most part, be accurate to the code base. From this source of truth, you can do all sorts of things, such as identify outdated content in the user guide, see what's new between outputs, get links for your release notes, and more. In this article, I share 8 quality-control prompts you can use when passing in your entire reference docs.
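As a rough illustration of the pattern, here is a minimal sketch of how an entire reference doc set might be concatenated into a single long-context prompt. The file layout (a folder of Markdown files), function name, and prompt wording are all my own illustrative assumptions, not taken from the original post.

```python
from pathlib import Path

def build_qc_prompt(docs_dir: str, instruction: str) -> str:
    """Concatenate a whole reference doc set into one long-context QC prompt."""
    sections = []
    for doc in sorted(Path(docs_dir).glob("*.md")):
        # Label each file so the model can cite where it found an issue
        sections.append(f"=== {doc.name} ===\n{doc.read_text(encoding='utf-8')}")
    corpus = "\n\n".join(sections)
    return (
        "You are reviewing an API documentation set. Treat the reference docs "
        "below as the source of truth.\n\n"
        f"{corpus}\n\n"
        f"Task: {instruction}"
    )
```

The returned string would then be sent to a long-context model; the quality-control instruction itself (e.g., "identify user guide statements that contradict the reference") is what varies across the 8 prompts.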

Seeing invisible details and avoiding predictable, conditioned thought (ZAMM series)

In this essay, I explore the idea of seeing the unseen aspects of things. I discuss several authors on this topic: Rob Walker, an art critic; Viktor Shklovsky, Russian formalist literary critic; and Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance. My main point is to avoid predictable, conditioned thought by pausing to ask questions about our experiences and the environment around us. In a world where prediction algorithms constantly direct us toward the most likely next word, pushing back and embracing creative ways of seeing and interpreting the world can inject new ideas and perspectives in ways that rejuvenate us.

Automate links in your release notes using AI (prompt engineering)

My previous prompt engineering technique focused on creating release notes using file diffs. In this article, I explain how to use AI to link the code elements frequently referenced in release notes and other documentation to their appropriate reference pages. The technique basically involves providing your reference documentation in HTML form along with instructions to link all the code elements in Markdown syntax.
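A minimal sketch of how such a linking prompt might be assembled, assuming the reference HTML and draft notes are already available as strings. The function name and prompt wording are illustrative, not from the original post.

```python
def linking_prompt(release_notes: str, reference_html: str) -> str:
    """Prompt the model to link code elements in notes to their reference pages."""
    return (
        "Below is the HTML of our API reference, followed by draft release "
        "notes. Find every code element (class, method, parameter) mentioned "
        "in the notes and wrap it in a Markdown link to its reference page, "
        "using the URLs and anchors found in the HTML. Leave all other text "
        "unchanged.\n\n"
        f"--- REFERENCE HTML ---\n{reference_html}\n\n"
        f"--- RELEASE NOTES ---\n{release_notes}"
    )
```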

Links from around the web -- June 10, 2024

The following are interesting reads or listens related to tech comm. Topics include podcasts on RAG techniques for AI content development, OpenAPI reference guides, dead-end counterarguments, Lavacon in Portland, and AI cautiousness.

[Podcast] Uncovering and communicating the value of your tech comm teams' work, with Keren Brown

In this podcast episode, I talk with Keren Brown, VP of Marketing and Value at Zoomin Software, about strategies for technical writers to demonstrate their value within their organizations, especially in light of recent layoffs in the tech industry. We discuss aligning documentation work with high-priority initiatives, quantifying the impact of technical writing, and making this work visible to executive leaders. Keren also shares insights on the changing landscape of technical writing skills in the age of AI and the role of translation in modern documentation workflows. Overall, this podcast will show you how to establish yourself as a highly valuable resource within your company.

[Prompt engineering series] Using file diffs for better release notes in reference docs

You can use AI prompts when creating biweekly release notes for APIs by leveraging file diffs from regenerated reference documentation. The file diffs from version control tools provide a reliable, precise information source about what's changed in the release. I also include a detailed prompt for using AI to analyze file diffs and streamline the release note creation process.
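To make the workflow concrete, here is a minimal sketch of gathering a file diff from version control and wrapping it in a release-note prompt. The `reference/` path, function names, and prompt wording are illustrative assumptions; the original post's actual prompt is more detailed.

```python
import subprocess

def collect_reference_diff(repo_dir: str, base: str, head: str = "HEAD") -> str:
    """Return the unified diff of regenerated reference docs between two commits."""
    result = subprocess.run(
        ["git", "diff", f"{base}..{head}", "--", "reference/"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout

def release_note_prompt(diff_text: str) -> str:
    """Wrap a git diff in instructions to summarize it as release notes."""
    return (
        "The following is a git file diff of regenerated API reference docs. "
        "Summarize the changes as release notes, grouping additions, changes, "
        "and removals. Ignore formatting-only edits.\n\n"
        f"```diff\n{diff_text}\n```"
    )
```

Because the diff comes straight from version control, the model summarizes only what actually changed, rather than guessing from memory.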

Updates two years later on my smartphone experiment

Two years ago, I started an experiment to reduce my smartphone usage. While I've reverted to using my smartphone regularly, I've learned a lot along the way. I've realized there's an inverse relationship between book reading and phone usage, and I've made a conscious effort to prioritize reading more books, especially with the reinforcement of book clubs. I've also accepted that while smartphones are necessary, it's the constant notifications that contribute to anxiety. By removing most social media and news apps, I've switched to a pull model for information, reducing my stress levels.

Thoughts on Docs as code being a broken promise

In response to Sarah Moir's post, 'Docs as code is a broken promise', I agree that Git's complexity can be a major hurdle for writers, especially when generating diffs for review. Simpler Git workflows and tools with visual interfaces for merging and diffs are essential for making the process smoother. Despite its challenges, I still prefer docs-as-code over proprietary tools because of its advantages, like using Markdown and generating diffs for review.

Get Better at Using Prompts With Deliberate Practice: One technical writer's little experiment — guest post by Diana Cheung

In this guest post, Diana Cheung describes using deliberate practice to improve her AI prompting skills. She emphasizes intentional, systematic practice rather than mindless repetition, similar to how one would learn coding or other skills. She shares her attempts at using Claude.ai to work through editorial improvements to a GitHub project's API documentation.

Prompt engineering series: Creating scripts to automate doc build processes

Documentation scripts automate repeated tasks with docs, such as building reference documentation. This tutorial builds on the conceptual content in Use cases for AI: Develop build and publishing scripts. Here I get more specific with strategies and techniques for prompts, walking through a prompt to build a script for generating reference docs.

What should your documentation metrics look like? Q&A with Zoomin about their 2024 Technical Content Benchmark Report

Zoomin recently released its Technical Content Benchmark Report for 2024, the company's second benchmark report on documentation metrics, analyzing data from 97.6 million user sessions across 136 countries. The report provides insights into average metrics like page views, bounce rates, time on page, GPT search usage, and more. In this Q&A with Rita Khait from Zoomin, she discusses how to interpret and use these benchmarks to set goals, improve content findability and performance, and demonstrate documentation's value to stakeholders and the business.

AI is accelerating my technical writing output, and other observations

At the start of the year, I wrote a trends post and noted uncertainty about the directions AI would take tech writers this year. (See My 2024 technical writing trends and predictions.) Now that we're into April, I have a better sense of how things are going and wanted to provide an update. My main observation is that AI is accelerating my technical writing output, making me about twice as productive as before. Also noted in this post: prompt engineering is a non-obvious skill that many tech writers still struggle with, even though documentation is more within AI's sights than creative content.

Upcoming conference: AI the API docs

There's an upcoming conference called AI The Docs 2024: API Documentation and AI Best Practices, held online on April 3, 2024. The conference is put on by the same API the Docs group / Pronovix that holds other online conferences and events. I'll be one of the speakers, and I'm planning to talk about prompt engineering.

Prompt engineering series: Gathering source input

A powerful way to reduce hallucination with AI tools is to supply an abundance of source material for the AI to draw upon. This article explores best practices for gathering and organizing that source material. I argue that you should be selective in what you include, preferring quality over quantity. Organize basic information first, then more advanced details. Include the reference output and any meeting notes.

Prompt engineering series: Error checking the output

In this article, I describe a method for using AI to fact-check its own output against source materials. The better your source materials, the more likely you'll be able to identify errors. However, even with this method, human review by SMEs (subject matter experts) is still necessary. Unfortunately, getting SMEs to thoroughly review drafts can be difficult. Additionally, SMEs have such specialized knowledge that it's easy for one SME to LGTM a doc without having 100% certainty about each part.
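A minimal sketch of the self-checking idea: feed the draft and the source materials back to the model and ask it to verify each claim. The function name and wording are my own illustrative assumptions, not the post's exact prompt.

```python
def fact_check_prompt(draft: str, sources: str) -> str:
    """Ask the model to verify a draft, claim by claim, against source material."""
    return (
        "Compare the draft below against the source material. For each claim "
        "in the draft, state whether it is supported, contradicted, or not "
        "found in the sources, quoting the relevant source passage for "
        "anything contradicted.\n\n"
        f"--- SOURCES ---\n{sources}\n\n"
        f"--- DRAFT ---\n{draft}"
    )
```

The claim-by-claim framing matters: a blanket "is this accurate?" question tends to get a blanket "yes," while itemized verification surfaces the individual statements an SME should double-check.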