Most conversations about AI still focus on prompts and chatbots, but the next shift is already here. Leading voice on AI, Mike Kaput, breaks down what agentic AI actually is, how tools like Claude Cowork and Claude Code are changing day-to-day work, and what small teams should understand before handing more tasks over to AI. It’s a practical conversation for marketers and business owners who want to move beyond experimentation and start thinking more strategically about what AI can really do.
Your AI Is Still Just Answering Questions. Here’s What It Could Be Doing Instead.
There’s a version of AI use that most of us are stuck in. You open a chat window, type a question or a request, read the response, maybe tweak a few things, and then close the tab. Repeat 47 times a day. Very useful, sure. But you’re still doing most of the work.
What Mike Kaput has been building looks a lot different.
Mike is the Chief Content Officer at Marketing AI Institute, co-author of Marketing Artificial Intelligence, and co-host of the Artificial Intelligence Show. He joined me on the Agents of Change Podcast to talk about agentic AI: what it actually means, how he uses it, and why the learning curve is shorter than you think.
What “Agentic AI” Actually Means
The phrase “agentic AI” gets tossed around a lot right now, and the definitions vary wildly depending on who’s talking. Mike cuts through it with a practical framing: an AI agent is a system that can take actions autonomously. Not just respond to prompts, but go do things.
“When I think of an AI agent, I’m thinking of an AI system or tool that can take actions autonomously,” Mike said. Deep research in ChatGPT or Gemini is a simple example — you give it a task, it goes off and executes a research plan. That’s agentic behavior. But it gets more powerful from there.
Tools like Claude Code and Claude Cowork (both from Anthropic, the company behind Claude) can access your files, create and edit documents, write and run code, and execute multi-step workflows, all based on your instructions and the permissions you grant. You’re not chatting back and forth. You’re delegating.
That difference matters a lot.
Claude Code Isn’t a Coding Tool (Mostly)
Here’s the part Mike admits he got wrong for a full year: he assumed Claude Code was only for software developers. The name doesn’t help. He ignored it entirely while he got more sophisticated with GPTs and custom prompts.
Then he read a post that reframed it: Claude Code isn’t just for programmers. Think of it as a general-purpose AI agent.
That unlocked everything.
Claude Code can read your files, update its own instructions based on your feedback, produce formatted deliverables, and chain together complex tasks that would normally require you to bounce between a dozen tabs copying and pasting. “I have tons of GPTs and Gems,” Mike said, “but I find myself in Minority Report with all these browser windows, chatting between stuff, copying and pasting all the time. Claude Code cuts a lot of that out.”
If the terminal window sounds intimidating (it does to most people, and that’s fine), Claude also offers Cowork, the same agentic capabilities wrapped in a non-technical interface inside the Claude desktop app. You get a clean tab, you give it commands, you point it at a folder, and it gets to work. No terminal required.
The “Skills” System: Your AI’s Instruction Manual
One of the most practical concepts Mike shared is the idea of building “skills”: markdown documents that tell Claude Code how to handle a specific repeatable task. It’s a hybrid between GPT custom instructions and a knowledge base. You give it format, tone, workflow steps, pointers to example files, and style guidelines. Then when you trigger that task, the agent reads the skill and executes accordingly.
Mike’s example: his team’s weekly newsletter. He feeds Claude Code the links and briefs, says “go,” and gets a clean, formatted newsletter file in a minute or two.
What makes this more powerful than a custom GPT? Iteration speed. With a GPT, you run it, read the output, go back in and update the instructions manually, run it again. With Claude Code, you tell it what you liked and what didn’t work, and it updates the skill document itself. The feedback loop collapses dramatically.
“Subject lines are super weak, and you messed up the formatting on this section,” Mike tells it. Claude Code updates the skill, runs another draft. Done.
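To make this concrete, here’s a sketch of what a skill file like Mike’s newsletter skill might look like. The structure below is illustrative, not an official Anthropic schema, and the referenced file names (brand-voice.md, newsletter-template.html) are made up:

```markdown
# Skill: Weekly newsletter draft

## When to use
Run this skill when asked to "do the newsletter."

## Inputs
- Links covered on the podcast this week
- The podcast brief (provided in the prompt or in this folder)

## Workflow
1. Read brand-voice.md and the past issues in examples/
2. Summarize each link in 2-3 sentences, in house tone
3. Draft three subject line options
4. Assemble the output using newsletter-template.html

## Style rules
- Plain language, short paragraphs, no jargon
- Subject lines under 60 characters
```

Because it’s just a text file, the agent can rewrite it in response to feedback, which is exactly where the fast iteration loop Mike describes comes from.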
How to Get Started Without Breaking Everything
The question I asked, and maybe the one you’re thinking, is: how do I let an AI agent near my actual work without it deleting everything I care about?
Mike’s answer: start with a sandbox.
Create a folder on your desktop. Put a few example files in there. Style guides, sample posts, maybe an HTML template if you want formatted output. Give the agent access only to that folder. Give it one concrete task. Watch how it works.
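That setup can be as simple as a couple of shell commands. A minimal sketch (the folder name and file contents are placeholders, and you could just as easily do this in Finder):

```shell
# Create an isolated sandbox folder for the agent to work in.
mkdir -p "$HOME/Desktop/cowork-test"

# Seed it with a small context file. Contents are illustrative.
cat > "$HOME/Desktop/cowork-test/style-guide.md" <<'EOF'
# Style guide
- Voice: plain, direct, second person
- Short paragraphs, no jargon
EOF

# Confirm the sandbox holds only what you put there.
ls "$HOME/Desktop/cowork-test"
```

From there, point Cowork (or Claude Code launched from inside that folder) at the sandbox and give it one concrete task. It can only touch what you granted access to.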
“Do not run before you can walk,” Mike said. And he’s not just being cautious; he admitted to accidentally deleting files he was working on because he had too many agent windows running at once.
The other guardrail is permission management. Commercial tools like Claude Code will ask for your permission before accessing files or running code. It won’t always be obvious what they’re asking or why (especially if you’re not a developer), but the habit of reading those prompts carefully before clicking “yes” is worth developing.
Connecting agents to your email, your CRM, or anything sensitive? Wait until you’ve watched it work in a sandboxed environment first. The upside is high. The downside is also real.
On That “95% of AI Projects Fail” Study
One of the more interesting moments in our conversation came when Mike pushed back on a widely-shared MIT study claiming that 95% of AI projects fail. The headlines ran with it. The skeptics celebrated.
Mike’s take: the study was based on around 50 qualitative survey responses. It’s not garbage research. But you cannot extrapolate from 50 responses to “95% of AI projects fail” and present it as a verdict on the technology.
My response? “Only 5% of small businesses make it past year five. Does that mean 95% of businesses are a bad idea?”
Mike put it simply: when AI implementations don’t go as planned, it’s usually a people problem and an organizational problem, not a technology problem. The technology is working. Getting teams to adopt it, integrate it thoughtfully, and iterate on it is where most of the friction actually lives.
Failed pilots are experiments with results. If that reframe doesn’t sit right, you’re probably holding AI adoption to a standard you don’t apply to anything else in your business.
Your Next Steps
If you’ve made it this far and you’re still treating AI as a question-answering machine, here’s where to start:
- Download the Claude desktop app if you haven’t already. Log in with a paid account and look for the Cowork tab.
- Create a dedicated folder on your desktop — call it “Cowork Test” or whatever helps you remember it’s a playground.
- Drop 2–3 context files in there: a writing sample, a style guide, maybe an HTML example if you want formatted output.
- Give it one concrete task. Have it write a blog post in your style. Format a newsletter. Draft a social post. Something specific with a clear “done” state.
- Watch how it works. Pay attention to what permissions it asks for. Notice what it gets right and what it gets wrong. Iterate the skill document until it reliably hits the mark.
The goal isn’t to automate everything. It’s to identify the tasks that eat your time, have clear patterns, and could be handed off, then build the infrastructure to hand them off well.
This isn’t the end of your job. This is the beginning of doing the parts that actually need you.
Transcript from Mike Kaput’s Episode
Rich: My next guest is the Chief Content Officer at SmarterX, and a leading voice on the application of AI in business. He’s the co-author of Marketing Artificial Intelligence, and co-host of the Artificial Intelligence Show podcast.
I’m excited to dive into agentic AI today and what it means to your marketing and your business with Mike Kaput. Mike, welcome back to the show.
Mike: Rich, thanks for having me back.
Rich: Now I looked it up and I last had you on the show in November of 2023, almost two and a half years ago. As we record this, has anything changed in the world of AI since we last spoke?
Mike: Not a single thing. My gosh, it has been a whirlwind of a couple years, I feel like, or year and a half. I feel like we’ve been accelerating dramatically since two years ago. I mean, we will get into some of the developments, but you know, honestly the more things have changed, the more some things have stayed the same.
Businesses are still trying to figure out what this AI stuff is and what it can actually do for them. That’s not a bad thing, but some of the questions we were talking about back then are still relevant today, I think.
Rich: All right, we’ll dive into that. Now as I mentioned in the introduction, talking about agentic AI, and we hear terms like ‘agentic AI’ and ‘AI workflow’ all the time. Now, what does it actually mean to those of us in the trenches either running or handling the marketing for a small to medium sized business?
Mike: Yeah, that’s a good question because there’s so much being talked about with agents and agentic that there’s a lot of conflicting definitions. So I’ll just say, kind of give my perspective as one person in the trenches here.
When I think of an AI agent, I’m thinking more of an AI system or tool that can take actions autonomously. So even the chatbots you’re using every day have agentic features. We can talk about this, but a great example is deep research in ChatGPT or Gemini. You’d consider that agentic AI, right? It’s taking a research plan and going off and doing research for you. It’s a limited piece of agentic AI in a bigger tool.
And then you have other tools that just operate totally autonomously. You have Claude Code, Claude Cowork, and I think Manus is another popular tool. These will just go do things for you based on your instructions and the permissions you give them, with humans overseeing different parts of the process, intervening as needed, and perhaps giving different permissions and guidance. But the AI is working on its own to achieve a goal.
So that’s kind of practically how I’d be thinking about this. It’s not just talking back and forth. It’s not just tool usage where ChatGPT, for instance, may go search the web or analyze data. That’s all well and good. This is much more computer use, file use, and autonomous action.
Rich: Alright. You mentioned a couple things, Claude Code and Claude Cowork. And perhaps I’m jumping around or jumping ahead a little bit, but can you explain to us what is Claude Code, what is Claude Cowork, and how does that differ from the Claude that I use every day?
Mike: Alright, so Claude Code is basically, you could think of it as an AI coding tool within Claude. You can either fire this up within the Claude app or you can use it in the command line interface on your machine, the little terminal window. It’s basically taking a Claude model and wrapping it in what they would call a harness of different rules and ways of working, where this tool can then go do a range of software development functions and tasks for you. So it’s often optimized and used for software development and technical work, though as we’ll talk about, it actually can be used for a wide range of work outside of that.
Claude Cowork, on the other hand, is very similar in the sense that it is agentic, just like Claude Code. It can go do things for you. It can write code, read files, create files, and execute workflows autonomously. It is just wrapped in a package that is much more friendly to non-technical people.
So it does a lot of the same things Claude Code can do, but it’s not optimized for software development. It’s optimized more for your non-technical marketer, salesperson, businessperson, et cetera. So both of these are related tools that use the Claude model to go do work more than just a chat window when you fire up Claude AI.
Rich: Are you using both these tools right now? And if so, can you give me an example of how you might use Claude Code or how you might be using Claude Cowork?
Mike: Yeah, for sure. So just out of personal preference and because I kind of went down the rabbit hole before Claude Cowork actually came out, I use Claude Code more than Cowork.
It’s not really an either/or situation. I just got in the habit of it. Cowork will do a lot of the same things that I’m talking about, so I’m much more into Claude Code, but I can speak to Claude Cowork as well.
But I would say a good example of what you might use Claude Code for is I have trained Claude Code to reference a number of different files that we would call skills. Anthropic calls them skills. All they are is markdown files, a format that is pretty token efficient and easy for AI to read. You can open up a markdown file and read it yourself. It’s just got some hashes and asterisks and formatting that might look a little different from a document with bolding, underlining, whatever.
So a really good way to use Claude Code is to build skills to do certain things for you. A very simple example: we have a skill where Claude Code will take all the links that we’ve covered on our podcast in a week and automatically produce our newsletter. Now a human curates and writes and edits a bunch of that, but basically it learned the format of the newsletter and knows it now because we’ve worked together on building this skill. It goes and references it the moment I say, “Hey, I’ve got the newsletter, the podcast brief is on my desktop, here’s kind of what to think about. Go do it.” And it goes, thinks, calls tools, and writes code as needed to do what it does. And in a minute or two, I have a nice clean newsletter file.
So very similar, you could do the same thing in Claude in just the chat window, but it might require a lot more back and forth. It might require referencing certain docs, whereas there’s much more autonomy and leaps of logic that Claude Code is making based on my direction to actually just go execute the work.
Rich: Now, let’s pause there because I think that’s really impressive what you’ve done and pulled all these things together. Now I built custom GPTs, Gems, Claude projects, what have you, in the past, where I could have it do a series of steps. How similar or different from how you’re using Claude Code is that from doing a GPT?
Mike: It can be very similar. And if you are getting a ton of value out of your GPTs or your Gems, I’m not necessarily saying you got to go reinvent everything for Claude Code. It’s the same sort of process, right?
So Claude Code, whether it’s in a skill or just in your prompt or in your instructions, it is learning how to do something, which steps to take, and then it is going and doing them in a repeatable way.
Now, I think Claude Code has a little more autonomy and creativity in how it decides to solve problems. It can call more tools natively. It can go actually create files, amend code. For whatever reason, the way the harness and the model itself works together, it’s pretty impressive beyond just using Claude alone, for instance, or a GPT, because of the kind of intuitive leaps in logic and strategic thinking it often makes. So I think it’s more like you can almost think of it as a GPT on steroids.
And what I really like about it, especially when you think of the fact it’s referencing a skill that it’s able to update and learn from, what I found super helpful is the iteration. So if you take a GPT and you say, okay, I’ve created the instructions for this GPT, let’s say it is to write the newsletter. I run it, it writes the newsletter. I go in, read it. I’m like, yeah, most of this is pretty good, but there’s a bunch of things I’d change. I jump back in, type those in, and then say, hey, we need to update the instructions. We go update the instructions, we run it again, et cetera, et cetera, et cetera.
Whereas Claude Code, same difference but much faster, much more iteration focused, because it can just go update its own instructions. So I’ll say, “Hey Claude, the newsletter looks really good. Subject lines are super weak. You messed up the formatting on this, and I don’t like the HTML you’ve layered in for whatever section.” It will go update the skill accordingly. It’ll go update it, run another one, we’re done.
So we’re cutting out a lot of the steps, a lot of the back and forth. I don’t know about you, but I have tons of GPTs and Gems – awesome stuff – but I find myself like I’m in Minority Report with all these browser windows, chatting between stuff, copying, pasting all the time. Claude Code doesn’t get rid of all that, but because it is able to go use files and update its own documents based on your feedback, I just find it can be much faster, much more robust as building on top of the architecture you’ve already got.
Rich: And so the skills that you reference, would you say are they the same thing as knowledge docs for GPT, or are they different in some way?
Mike: Yeah, you know, they can be. I’ll be honest, I’m still feeling out kind of what is the best practices here. I can share kind of what’s worked for me, but the skill is basically just going to tell Claude Code or Claude itself, you could use this skill document with Claude if you like. It’ll tell it how to do something.
So as part of that, it will reference all the different style, tone, instruction, and formatting considerations. It may also point to other documents, so you may have it pointing to different knowledge-base documents. Like, “Hey, go reference the overall brand voice document”, or “Here are a bunch of examples of other newsletters and where to find them”, for instance. So that’s pretty robust, where that’s not all going to just be in the skill document. You may have this kind of underlying knowledge architecture, which is actually something we could talk about.
What I really like about Claude Code is it’s forced me to organize kind of a big infrastructure behind the scenes: here’s a bunch of example files, here are the knowledge blocks and things that you need to know about SmarterX as a company, about myself, about how I think, how I write, et cetera. So, long story short, a skill will tell it what to do and how to do it, and perhaps reference some of those other knowledge documents and files as well.
Rich: It almost sounds like a hybrid between the knowledge docs and the instructions for GPT, the way that you’re explaining it right now.
Mike: That is kind of how I approach it. Yeah.
Rich: Okay. Awesome. So when you first started working with agentic AI, what were the first steps? Like you sat down and you said, I’ve got a problem to solve. Or you’re like me, where you’re like, let me go find a problem so I can solve it with AI and figure this stuff out. What were some of the first steps as you kind of entered that world?
Mike: I’ll be perfectly honest with you, my first step with Claude Code was both dismissal and avoidance, which I actually think is important to note because it took me forever. I knew about Claude Code a year ago and did nothing with it. Because I personally, mistakenly, and I guess this isn’t really my fault since it’s called Claude Code, thought: I don’t do a ton of direct programming, I’m not building a ton of apps or software in my role, and I’m getting pretty good with GPTs, Gems, prompting. Why do I need this? This doesn’t seem like it’s valuable for me.
I think I read a post, and I’ll have to find the link, several months ago that was like, look, you’re thinking about Claude Code the entirely wrong way. It’s not just for programmers. Think of it more as a general-purpose AI agent. And as someone that is in this space and had been a bit behind, like I hadn’t really used too many agents or built my own or anything, I was like, oh, okay, maybe there is something to this Claude Code thing.
So I finally fired it up in the app and in the terminal and started out just experimenting. Mostly, honestly, having it tell me… I was like, “Hey, here’s where I’m at. I don’t program. I’m not a developer. I’m very savvy with AI and have all these workflows. What do you need from me? Where should we start?” And so really what I started with, and this is why having all your prompts, your workflows, your knowledge bases documented, kind of what we’ve perhaps been talking about in the industry as best practices for a couple years, this is where it all comes to fruition.
Because I have all this stuff, all these workflows I run regularly using a combination of GPTs, prompts, and tools, I could take one of those, give it to Claude Code and say, “Hey, let’s work together to build out a full skill for this. What I want to do is just click a button or say to you, hey, let’s go do the newsletter, or the blog post, or the whatever, and it’s done.” It takes some back and forth to get to that point. But basically I’m taking all the steps I have and condensing them into something an agent can just run with.
Rich: Okay. So when you first started this process, you were talking just back and forth with Claude, the general LLM, and that helped you work with Claude Code. Did I misunderstand that?
Mike: Yeah, it’s actually a little bit of both. So because I was unfamiliar and uncomfortable really with Claude Code, I started in Claude, the regular chat. I eventually just started talking to Claude Code directly, both through the app and through the terminal window, because you’re still talking to an LLM, so you can still do the same kinds of things you do in the Claude Web app. So I default now to just talking straight to Claude Code and be like, “Hey, I’m trying to figure out more ways to use you and what you do well, can you help me out?”
Rich: Come up with a list.
Mike: Yeah.
Rich: Excellent. And you say the terminal, so you’re on a Mac, I’m guessing.
Mike: I am, yeah.
Rich: Me too. As are all talented, creative people. No, I’m just kidding. So you’re literally talking about going to the terminal window, which is something I only recently started doing for very specific cases. It feels like 1965 computer use with this weird prompt and I’m like, is it behaving the way…?
But, that seems scary and intimidating to a lot of people who are trying to run a business, that sell HVAC services or landscapers. What is the learning curve for this? Is this something that marketers and owners should be investing their time in, or is this something you bring in a specialist or an outside agency to help you set these up? Like what do you think the practical – for those people who aren’t like us, who are like, I’m just going to do it because I love this world – how much effort should they be putting in at this juncture in time?
Mike: I’ll be honest, if you asked me this a few months ago, or even a month or two ago, I probably would’ve said that you need help with this because your average person totally, understandably, does not want to mess around with the terminal.
I grew up in the eighties and nineties, sci-fi and cyberpunk stuff, so for me, I’m like, oh, this is great. But most people don’t want this experience. The great news is, don’t bother with the terminal at all to start if you’re in kind of the audience Rich just mentioned. Go use the Claude Web or Claude desktop app.
So if you have a Mac or Windows, go download the app and log into your account. You will see an option, a window, a tab, depending on the UI, to use Claude Code rather than chat. You can experiment with it there to start. That’s a much easier way to get started, because if you start in the terminal window, it’s going to have you download stuff, and you’re going to have to invoke Claude every time you use it. It’s probably going to be a little bit of a bridge too far and intimidating.
Even better, in the last month Anthropic, the company behind Claude, has released something called Cowork. Cowork, like we have kind of alluded to, is just a version of Claude Code wrapped in a non-technical interface. So when you jump into that Claude desktop app, you’ll also see a tab called Cowork that is nice and pretty and easy to understand for us marketers, salespeople, and business leaders over here. Then you can start giving it commands, having it go do things agentically on your machine by giving it permissions to access certain files or folders or areas on your computer, and experiment with it that way.
That is the easiest, best way I would argue for a lot of business leaders to get started with the kinds of capabilities we’re talking about. If you never touch Claude Code and you just focus on Cowork, you’re getting the same types of agentic AI skills and capabilities that I think are going to be really important moving forward.
Rich: Awesome. You’re just getting an easier to use interface.
Mike: Yep.
Rich: Alright, so we’ve talked specifically about Claude. And are there equivalents in ChatGPT, and Gemini, and Microsoft?
Mike: Yeah. So in ChatGPT they have a tool called Codex, which is really your code generation, your AI coding tool. Again, because that is so specifically for coding and because I just kind of relatively recently got over my trepidation with Claude Code, I have not played a huge amount with Codex. But if you are interested in AI-paired programming, Codex would be something else to explore in ChatGPT.
ChatGPT also has, if you’re kind of in your standard chat window and you click the little plus button, you can go into an agent mode, which is a little different than what we’ve discussed. But agent mode, basically you tell it to go do something. Like for instance, the other day I was experimenting. I had all these random things bookmarked on X and I was like, going through and clicking all these to remove them is going to be a real pain. Maybe agent mode can do it for me.
So I click agent mode, say, “Go to X, log into my account and go uncheck or unmark all these bookmarks.” It will show you exactly what it’s doing in real time in a browser that it uses itself. It will then check with you like, “Hey, I need to log in. Can you gimme the login?” “Hey, am I doing this right?” Like you’ll have kind of oversight over it, but it will go work autonomously. But it’s a little bit different than Claude Code.
Codex is going to be much closer to a Claude Code within ChatGPT. And I’ll be honest, I’ve not explored as many of these capabilities yet in Gemini, though it does have plenty of them. Like I mentioned, deep research alone is an agentic type of capability. There are also other tools Gemini has where you can start generating code, working a little more agentically, and building your own agents.
Especially in Google AI Studio, and I believe they also have an agent studio at this point. I have not started to dive deeply into those. But all the major labs are baking in these types of capabilities in somewhat similar ways at the moment, especially as agentic becomes such a big thing. I would guess you’re going to see a very similar analogy to Cowork come out in ChatGPT and Gemini. In fact, it’s been only a little time, but I’m actually surprised they haven’t released something yet.
Rich: Alright, so you brought up a point that I wanted to talk about, because it’s the part that scares me the most. Which is giving access to these AIs. I know that there are guardrails built in. I know that there are controls built in. But I was just reading something about OpenClaw right now, that’s all the rage. And somebody who is a scientist on the team let it manage her emails and it just deleted reams and reams of emails before she was able to reel it back in.
I think that’s the fear of a lot of us if we’re giving AI control. What are some of the guardrails or our own controls that we can build in to avoid a digital meltdown?
Mike: Yeah, that’s a really good question. I would say this is the number one through ten barrier that people have. Because these tools are still really new. They’re untested in a lot of cases, and they also can go wrong in ways that people don’t anticipate. The moment you start messing around with an agent, you start being really geeked out and say, “Whoa, it’s really cool it can go do all those things.” You also get kind of scared and say, “Oh my God, it can also go do all those things.”
So I would say first, just understanding as step one, that when you grant Claude Code, just let’s say for purpose of argument, access to a folder in a session you are working on, it can then go see what is in that folder. It can go use those files. And it can make changes to those things, including deleting things.
Now, Claude Code is going to be a little less of like, “Hey, go to my browser and log into my bank account.” Though there are plenty of agents you could allow to do that, so some common sense is important here. An agent almost always cannot figure out your login credentials on its own; you have to give them to it. So don’t let it log in to things that you don’t want it to have access to.
This gets a little murkier though when you’re saying like, hey, maybe we should connect it to HubSpot. Maybe we should connect it to my email. It’s almost always, I would say, not going to go rogue Terminator Skynet and start saying, “Hey, I’m going to delete all Mike’s emails.”
OpenClaw is a whole other thing, because OpenClaw has no guardrails built in. Typically, the tools from the commercial labs will have at least some guardrails where they will ask you pretty frequently, “Are you sure you want to do this?” “Do I have permission to do this?” “Can I access this file?” You say yes or no, or what have you. So that’s helpful.
But I’ll tell you, even as someone that uses Claude Code, half the time I’m like, I think that I know what it’s asking for. But am I positive? Again, I’m not a developer. It says, can I run this code? Sure, why not? Like, I don’t know the difference. So there’s a huge danger there. I would be really, really careful about what you point it towards.
I would absolutely start by saying, I created a folder on my desktop, it only has access to that. That’s where I’m going to start experimenting so you can really get your feet wet. Because the moment you start connecting it to different accounts, the moment you start connecting it to different file or code bases, you are off to the races and kind of the wild west a little bit with what can go wrong.
So I would say do not run before you can walk for sure. And you know, we could talk about OpenClaw, but a lot of the horror stories with OpenClaw are because you have this open source AI agent, people install it, and it has no guardrails. It is extremely autonomous. So it’s a more extreme example of what can go wrong. But very real dangers and possibilities there.
It’s good, it’s illuminating, but don’t worry, it’s not going to be like you open Claude and it says, “Hey, you told me to go make you a dinner reservation, so I emptied your bank account paying people off to get the best restaurant table.” You’re not going to run into that with Claude Code. That doesn’t mean there aren’t serious dangers, but at least you don’t have to worry about that kind of sci-fi scenario.
Rich: I think your idea of giving it access to one folder to get started, almost like here’s your sandbox to play in, is a great way for you to start to understand what it can and can’t do. And that’s great advice.
Mike, what surprised you the most once you started to add agentic AI into your workflow?
Mike: What surprised me most is how my work style changed. So for a very long time, I’ve been hyperaware, because it’s a failing of mine, that I get too bogged down in multitasking. I sit in front of my computer, I work on something a little bit, get distracted, or a thought occurs to me, a new idea, or shoot, I’ve got to go put out that fire. And I context switch. I have to jump to another thing, and it kills me. My work is way worse when I context switch.
So for the last few years I’ve tried to really protect my time and do very focused, single-tasking, deep-work blocks. It’s been amazing. It’s wonderful. Guess what? That all gets thrown out the window with agents, because now you can manage one or more windows where agents are working autonomously, maybe on related things, maybe on separate things. This technology rewards orchestration.
So I’ll start one project. It might ask for some permissions, I’ll say yes, and it starts working. It can take a few minutes to get through each step, so while that happens, I’ll go open another window and fire up another project. You don’t have to work like this, but it’s tempting when you have agents. You really do find yourself in this interesting, almost video-game-like mode, where I’m managing a bunch of different digital colleagues versus sitting down and doing one thing.
Now that’s not all the work I do. I do plenty of deep work and single tasking still on stuff that needs a human. But when it comes to agentic AI, it’s not just me sitting having a single long chat with ChatGPT on one thing. It’s doing that across several different chats, agents, entities. It’s a very interesting, different way of working for me.
Rich: Tabs and tabs of minions working on different tasks.
Mike: That’s exactly it.
Rich: It’s so funny because when you started your answer, I thought you were going to say how focused I’ve become because of this. And I’m thinking to myself, the number of deep research tabs that I have open at any given time where I’m like, I got to go back and finish that. But I’m glad to see that we’re still in the same boat.
Mike: I’ve literally forgotten about projects that an agent was doing. And I’m like, oh no, I have all these great files for something we’re working on that I completely lost track of. So that’s my fault. I’ve got to have fewer agents working at once. But it’s a real danger.
Rich: Yeah. So what are some ways that small teams can start experimenting with these tools, the agentic AI, AI workflows, without needing to become developers?
Mike: Yeah. Just because it’s so accessible, I would say if you have a paid Claude account through Anthropic, try out Cowork. Start a folder on your desktop specifically for a single project you want to work on with Cowork. Drop a few files in there.
For the sake of argument, let’s say we want Cowork to write and format a blog post for us in HTML. I’d start a folder on my desktop called Claude Cowork Blog, or whatever. Put in a few files that have examples of my writing style, maybe even copy in a doc with the HTML of what a blog post looks like on our website. Drop those in. Maybe a style guide. The typical kinds of things you’d talk through with a chatbot.
Then fire up Cowork, log in, point it at that folder, and say, “Hey, look at the examples. Look at the HTML. Write me a perfectly formatted blog post in our style and tone on X,” whatever topic you want to pick. Start with that small example and just get used to trying it out. It will take a few reps to learn: oh, okay, I have to give this permissions; oh, okay, I have to point it at different folders; it may ask for different things or files. And from there, I would just watch how it works through things.
It won’t always show you the exact chain of logic it’s following, but it’s fascinating and educational to see how these tools work, and to start thinking through what’s different here versus chatting back and forth with something like ChatGPT.
Rich: Mike, there are a lot of opinions about AI right now, and there’s a lot of ethical concerns out there. There are some people who say, this is the next greatest thing, evolution of mankind. And others who think that this is the end of the world or the end of work and all this sort of stuff.
What is one hot take or one opinion around AI where you’re like, that is so completely wrong, and what is it and what is your take on that?
Mike: I guess my hot take, and I’m going to call out this one study, but we’ve already covered it, so I think they’re probably already upset with us. There’s this MIT study that said 95% of AI projects fail. It got so many headlines. It’s MIT, after all. And it turns out to be a study based on 50 or so qualitative survey responses; they go into the whole methodology.
We have a whole podcast episode on it, and it’s interesting. You’ve learned some cool stuff about those 50 organizations. It’s not garbage research, but you can’t use a methodology like that and then say 95% of AI projects or pilots fail. Yet everyone took it and ran with it, because everyone wants confirmation: “See, I told you so. AI is overhyped. The technology isn’t there and it doesn’t work.”
There are plenty of limitations, dangers, and concerns with AI. I lose sleep over them every month, don’t worry. I’m not here saying rah rah AI all the time. There are very real things we have to figure out. But I can promise you the technology is not behind where people say it is. It may be unevenly diffused throughout organizations. It may be unevenly competent in certain areas. But when people tell you this is getting to human level at certain types of knowledge work, that is very true. And it is not slowing down, despite, again, what people want to hype up or say.
So while there’s a lot of nuance to this, for anyone saying, “Well, guess what? AI doesn’t work,” the issue is usually not the technology. It’s a people and organizational problem.
Rich: Well, it’s funny. I haven’t heard that episode of your podcast, which I really recommend people check out, and I wasn’t familiar with this study either. But if somebody brought it to me, I’d say only 5% of small businesses make it past year five. So does that mean 95% of businesses fail? Yes. And that’s okay. Failure is part of the human experience. I would hope that 95% of those AI pilots failed so that we can continually get better and learn things from them. That’s not a failure, that’s an experiment with a result.
Mike: I love that.
Rich: Take that, MIT. That’s all I have to say about it.
Alright. Awesome. A lot of great stuff. Mike, if people want to learn more about you, SmarterX, where can we send them?
Mike: Go to smarterx.ai. You can learn all about our company and what we’re doing with AI education and transformation.
You’ll also find a link on that website, or you can search for it yourself: The Artificial Intelligence Show is our podcast, co-hosted by myself and our CEO. Every week we break down all the news we can that’s going on in AI, and we’d love for you to take a listen if you haven’t already.
Rich: Awesome. Mike, thanks so much for coming back to the show. I really appreciate your time.
Mike: Rich, thanks for having me.
Show Notes:
Mike Kaput and his team at SmarterX help businesses understand and apply AI in practical, strategic ways. Be sure to tune in to his podcast, where he regularly breaks down the latest news in AI. And grab a copy of his book, Marketing Artificial Intelligence.
As president of flyte new media and founder of the Agents of Change, Rich Brooks brings over 25 years of expertise to the table. flyte, a web design and digital marketing agency based in Portland, Maine, helps small businesses grow online. His passion for helping small businesses led him to write The Lead Machine: The Small Business Guide to Digital Marketing, a comprehensive guide to digital marketing strategies.