AI Topics, broad and focused

AI is relevant to every national, economic, and technical discussion these days.  We’ve only scratched the surface of the implications and benefits.  Here are a few things I am thinking about, ranging from the national level to the neuron level.

AI Action Plan

The administration’s AI Action Plan dropped this week. A lot of people think the AI Plan is good.  I guess it isn’t bad, like tariff-plan bad, which is good.  But it lacks the punch of the Apollo program; it lacks a clear, obvious “let’s land on the moon” goal.  It will be hard to rally people around.  

I learned a lot about leadership from the Windows 95 effort and from Brad Silverberg and David Cole.  Leaders have to boil large, complex programs down to a small set of pithy goals, repeat those over and over again, and then get out of the way, letting people chase those goals.  Brad was great at this, and David was too.  The AI Plan is not yet good at this.

There are also some notable big holes.  One obvious one is immigration — when you look at the talent working on AI today, many of the researchers are non-US natives.  In the past, we did a great job attracting them to the US.   What is our plan now?  We should absolutely educate US natives, and we should scoop up brains from around the world as well.

AI and Tech are driving the economy

I remember when the software business was an asset-light business.   You could start a company with a couple of laptops in a coffee shop.   No more!   The investment, asset, and energy requirements of the sector continue to escalate, which is why an AI Action Plan is a worthwhile consideration.  I am continually surprised at the scale of what is happening.

Startup funding — I remember when $1-2M was a reasonable first raise, and $10M was a lot.   Now:

Anthropic is reportedly in talks to raise up to $5 billion in a new funding round, which could bring its valuation to over $150 billion. The potential raise reflects its rapid growth and increasing market leadership in foundational AI.

Paul Kedrosky on AI Capex:

We are in a historically anomalous moment. Regardless of what one thinks about the merits of AI or explosive datacenter expansion, the scale and pace of capital deployment into a rapidly depreciating technology is remarkable. These are not railroads—we aren’t building century-long infrastructure. AI datacenters are short-lived, asset-intensive facilities riding declining-cost technology curves, requiring frequent hardware replacement to preserve margins.
And this surge has unintended consequences. Capital is being aggressively reallocated—from venture funding to internal budgets—at the expense of other sectors. Entire categories are being starved of investment, and large-scale layoffs are already happening. The irony: AI is driving mass job losses well before it has been widely deployed.

Tom Steyer on energy policy:

AI data centers need massive amounts of electricity—and they need it now. The AI boom will triple data center energy demand by 2030.
Meanwhile, wind and solar are the cheapest electricity we can build. Not the cleanest. The CHEAPEST. In most of the country, new solar costs half what new natural gas does — and it comes online in less than half the time. That’s not “climate dogma,” that’s free market capitalism deciding what’s best.

Every industry is now driven by tech.  Energy is increasingly driven by the tech industry.  Finance is driven by the tech industry.  Technology shifts in automobiles are upending that industry.  How long before the space launch business becomes dominated by the tech industry?  With Starlink it is already heading that way; wait until we figure out how to put solar collectors or nuclear power plants in orbit.

And tech is now driven by AI.  It is good that the administration is putting AI front and center; we just need to keep tuning the plan.

AI coding tools: great, but not close to done

I spent time this week using three different AI coding tools: GitHub Copilot, Claude Code, and Amazon Kiro.  I asked each to help with three different coding tasks — re-implementing my blog (www.theludwigs.com), creating a Monte Carlo analysis to evaluate a portfolio, and writing a smart home app to report temperatures from sensors around the house.  I wanted to try GitHub Spark but it is not available to me yet.

TL;DR: all these tools work, and I won’t code without an assistant going forward.  But we are nowhere close to the endgame.  I ultimately don’t want an app composed of a bunch of brittle static code, even if an AI wrote most of it.  I want the AI to replace the app and dynamically create the views and flows I want, when I want them.   I don’t want just a timeline view of my blog — I want to see a word cloud view, I want the top new book recos based on what I have already read, I want a feed of just history books, I want to see my shift in genre consumption over time, etc, etc, etc.  I want the content to be viewable on a modern PC, a phone, or any other future device, without having to create explicit code to render it.  I don’t want to have to imagine all these views a priori; I want my blog software to be flexible enough to do any of this and more.  I don’t want AI at design time, I want it at runtime.  We aren’t there yet from a feature or cost perspective.

Along the way, a lot of my hard-earned coding knowledge will become irrelevant – as Ethan Mollick notes, we probably just need to get out of the way of the AI:

The lesson is bitter because it means that our human understanding of problems built from a lifetime of experience is not that important in solving a problem with AI. Decades of researchers' careful work encoding human expertise was ultimately less effective than just throwing more computation at the problem. We are soon going to see whether the Bitter Lesson applies widely to the world of work.

And this is OK.  We have a whole new skill to learn: how to guide and manage these AI assistants.  The endgame is an entirely different approach to building the apps we use, shifting more of the design and runtime responsibilities to the AI.  

The remainder of this section details some of my experience with the tools this week.


All these tools are IDE-centric — Copilot is right in VS Code; Claude Code has a VS Code plugin; Amazon Kiro is a fork of VS Code.  I want an IDE solution — generating code with, say, ChatGPT outside an IDE and constantly having to round-trip it into the IDE is not a good experience.

None of these tasks is particularly hard, and each of the tools was able to come up with solutions.  Some impressions of each:

  • I am probably not a customer for Kiro.  For the portfolio forecasting app, it created a detailed specification, a detailed implementation plan, and then developed a complex solution with a TypeScript backend API server, a JavaScript/CSS frontend, unit tests for every module, and comprehensive documentation for it all.   It took the better part of 36 hours to complete, with constant bouncing around as it wrote code, passed unit tests, integrated code, tested against system tests, etc.  A ton of prompt and token submissions; I can’t imagine what this costs to run, but thankfully they aren’t charging yet.  On the one hand, I am very confident in its code, as Kiro spent a lot of time on test frameworks and test code.  On the other hand, it wrote so much code that I have no idea if it is all correct.
  • Claude Code is great.  For the portfolio app, it developed a pretty simple Python CLI app.  It did some testing but didn’t overthink it.  I had a working app in 45 minutes.  For the blog rewrite, it did an adequate but incomplete job.  Each new blog post would have required me to re-edit the source rather than just dumping a post into a directory.
  • GitHub Copilot is good.  It did the best job on the blog app, building a simple equivalent to Jekyll/Hugo that could be deployed easily to GitHub Pages.  Copilot built the app quickly and quickly addressed issues as I pointed them out — for instance, an initially poor pagination design.
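For a sense of scale, the portfolio task is the kind of thing a simple Python CLI handles in a few dozen lines.  Here is a minimal sketch of my own, not the output of any of the tools; the function name and the return/volatility assumptions are illustrative:

```python
import random

def simulate_portfolio(start_value, mean_return, stdev, years, n_paths, seed=42):
    """Monte Carlo forecast of a portfolio's ending value.

    Each path compounds one normally distributed annual return per year.
    mean_return and stdev are annual figures, e.g. 0.07 and 0.15.
    """
    rng = random.Random(seed)
    endings = []
    for _ in range(n_paths):
        value = start_value
        for _ in range(years):
            value *= 1 + rng.gauss(mean_return, stdev)
        endings.append(value)
    return endings

# Illustrative run: $100k, 7% mean return, 15% volatility, 30 years.
endings = sorted(simulate_portfolio(100_000, 0.07, 0.15, years=30, n_paths=10_000))
print(f"median ending value:  ${endings[len(endings) // 2]:,.0f}")
print(f"10th percentile:      ${endings[len(endings) // 10]:,.0f}")
```

The hard part is not this loop — it is knowing whether the assumptions and the statistics are right, which no amount of generated test scaffolding settles for you.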

Going forward, I won’t code without AI assistance.   I have no time to deal with the details of various libraries and frameworks, to understand all the ins and outs of CSS, etc.  This is what computers are for.  These tools aren’t perfect, but they are the most exciting thing to come along in software for quite a while.

The more precise you are in your requirements and the more precise you are in your feedback, the faster the work goes.  And the tools are great at basic debugging — but I have no idea if the huge Amazon Kiro portfolio app is actually doing the right math, which is still a human concern.

AI at the edge

I dipped back into edge computing and smart home applications this week.  I wrote a small app to collect temperature readings from HomeKit and Matter devices.  What a f&*king morass of schemas, security regimes, hierarchies, etc.   The introduction of each new standard, like Matter, just makes the problem N+1/N more complicated.  A prime opportunity for an intelligent agent to dynamically assemble all the underlying info in my house, and present it to me in whatever view I want.  But so far, the smart home vendors don’t get it.  
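To make the morass concrete, here is a toy sketch of the normalization layer such an app ends up hand-writing.  The field names are illustrative rather than actual HomeKit or Matter schemas, though Matter's temperature cluster really does report hundredths of a degree:

```python
# Hypothetical raw readings, shaped roughly the way two ecosystems report them.
homekit_raw = [
    {"accessory": "Bedroom Sensor", "service": "TemperatureSensor",
     "CurrentTemperature": 21.5},                     # degrees Celsius
]
matter_raw = [
    {"node": 12, "endpoint": 1, "cluster": "TemperatureMeasurement",
     "measuredValue": 2230, "label": "Garage"},       # hundredths of a degree
]

def normalize(readings):
    """Flatten both shapes into uniform {room, celsius} records."""
    out = []
    for r in readings:
        if "CurrentTemperature" in r:       # HomeKit-style record
            out.append({"room": r["accessory"],
                        "celsius": r["CurrentTemperature"]})
        elif "measuredValue" in r:          # Matter-style record, 0.01 degC units
            out.append({"room": r["label"],
                        "celsius": r["measuredValue"] / 100})
    return out

readings = normalize(homekit_raw + matter_raw)
for r in readings:
    print(f"{r['room']}: {r['celsius']:.1f} degC")
```

Every new standard adds another branch to that `if`, which is exactly the busywork an intelligent agent should be absorbing for us.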

From Nilesh Jasani, Edge computing didn’t pan out:

The failure of the edge was not simply a stall. It was a retreat. As the edge faltered, the center grew unimaginably powerful, growing at nearly 30% from already astronomical levels, creating a feedback loop that pulled even more of the world’s computational work back into the cloud.

Edge computing has been a fever dream for a while now.  We were excited about it at Surround.io — the vast amount of cheap compute power is so alluring.  The utility of the cloud, and now AI in the cloud, coupled with the deployment and management complexity of edge devices, has continued to doom attempts at edge computing. 

AI shorts

Simon Willison consistently writes interesting posts about tools — Using GitHub Spark to reverse engineer GitHub Spark.  The details on the Spark system prompt are fascinating.   Thank goodness computers can figure out things like system prompts for us!

Meta Unveils Wristband for Controlling Computers With Hand Gestures — a fascinating use of AI to capture intent from neurons and the electrical signals flowing through them; congrats to Reardon et al.
