OpenAI leverages Infosys to expand global enterprise reach
“OpenAI and Infosys just announced a deal to push ChatGPT, and specifically Codex, their coding tool that's kind of a competitor to Claude Code, into Infosys' enterprise client base. This is across more than 60 countries. They didn't disclose the terms or how much money is changing hands. For context on Infosys: they did about $267 million in AI service revenue last quarter. OpenAI gets a channel into Fortune-tier accounts that they don't reach directly.”
Train agents with persistent memory and correction files
“The most important thing when you're setting up these agents is to give each one its own soul.md, so give it a personality: talk about what you like, what you don't like. Then memory.md for persistence, because obviously you want it to remember things as well. And lessons.md too, because you want self-correction after mistakes, right? You don't want it to make a mistake and then keep making the same mistakes over and over.”
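The three files above can be sketched as a tiny scaffolding script. This is a minimal illustration, not any particular product's format: the file names follow the transcript, but the seed contents, the `agent_state` directory, and the `record_lesson` helper are assumptions for the example.

```python
from pathlib import Path

# Hypothetical sketch: scaffold the soul/memory/lessons files the speaker
# describes. Directory name and seed contents are illustrative assumptions.
AGENT_DIR = Path("agent_state")

SEED_FILES = {
    "soul.md": "# Personality\n- Tone: direct, concise\n- Likes: small diffs\n- Dislikes: vague asks\n",
    "memory.md": "# Long-term memory\n(appended to after each session)\n",
    "lessons.md": "# Lessons\n(one entry per corrected mistake)\n",
}

def init_agent_state() -> None:
    """Create the state directory and seed any missing files."""
    AGENT_DIR.mkdir(exist_ok=True)
    for name, seed in SEED_FILES.items():
        path = AGENT_DIR / name
        if not path.exists():
            path.write_text(seed, encoding="utf-8")

def record_lesson(mistake: str, correction: str) -> None:
    """Append a self-correction so the same mistake isn't repeated."""
    with (AGENT_DIR / "lessons.md").open("a", encoding="utf-8") as f:
        f.write(f"- Mistake: {mistake}\n  Fix: {correction}\n")

init_agent_state()
record_lesson("used prod API key in tests", "load keys from .env.test")
```

The point of splitting the three files is that each one is loaded for a different reason: soul.md on every run, memory.md for continuity, lessons.md whenever the agent is about to repeat a class of task it has failed at before.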
Thinking Machines Lab enters massive Google compute partnership
“Google Cloud has just signed a multi-billion-dollar deal with Mira Murati's Thinking Machines Lab. This is in the single-digit billions. It gives Thinking Machines access to NVIDIA's GB300 systems on Google Cloud, plus training and deployment services for their first product, called Tinker, which is a tool for building custom frontier models. Thinking Machines is raising at a $12 billion valuation, and they run heavy reinforcement learning workloads, which are insanely compute-intensive.”
Optimize the Stack - NVIDIA has shifted from being a chip designer to a systems company, utilizing 'extreme co-design' to treat the entire data center as a single, integrated computer.
“The computer of the future is the data center, and the data center is the computer.”
Anthropic cybersecurity tool exposed via third-party contractor credentials
“Anthropic's response to all of this is that they're investigating, and so far they say they've found no evidence that it touched Anthropic's own systems, just the vendor environment. I think this is interesting because this is where a lot of these security incidents are happening right now. It's not really a model exploit; it's just a contractor credential plus a predictable URL pattern. And my advice to people is this: if you are a company running AI tools, make sure you are auditing who is on your vendor side.”
10xScience automates drug candidate triage using AI agents
“What they're doing that's so fascinating to me is this: models like DeepMind's protein predictor are spitting out thousands and thousands of drug candidates. And there's a huge bottleneck in pharma where it's not just about getting all of these candidates, it's about triaging them, actually figuring out which of all the candidates is worth pursuing to try to make a medicine or therapeutic. 10xScience is basically building a SaaS layer on top of that.”
“Instead of having three or four humans on a task, where something like a cold email campaign might take two or three weeks, now you have one human who can probably do it in a day or two, and that's being generous. It probably takes a day. You may need some reviews done, but then you're often sending the emails the next day. The human timelines just don't work anymore.”
Chrome becomes an AI coworker via Gemini integration
“What Google announced is that they're turning Chrome into an AI co-worker. They have a new feature called Auto Browse, powered by Gemini, which runs inside Chrome for Workspace, reads context across your open tabs, and automates actual workplace tasks. It can enter your CRM data, compare vendor quotes, summarize candidate portfolios, and write up competitor research. If I'm being honest, it sounds like an interesting, useful tool, and I've actually tried some similar things.”
New TPU chips claim superior performance per dollar
“Google announced two new TPUs: the TPU 8T for training and the TPU 8I for inference. The split alone is, I think, what's really interesting to me. Inference is the dominant cost of running AI in production right now, and having dedicated inference silicon matters economically; it makes a big impact on the bottom line. Google's claims are three times faster training, 80% better performance per dollar against the NVIDIA alternatives, and the ability to scale to more than a million TPUs in a single cluster.”
AI agents replace traditional sales and marketing roles
“This is an agent that helps us on sales across the board. It will create all the cold email sequences, come up with all the leads, scrub the leads, de-duplicate the leads as well, send them out on a sequence, and constantly iterate over time. It will constantly self-improve. This agent is part of a stack we call Single Brain, which is all of our revenue agents.”
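The scrub-and-de-duplicate step the speaker mentions can be sketched simply. Keying on a normalized email address is an assumption for illustration; a production agent would likely also fuzzy-match on company or domain.

```python
# Hypothetical sketch of lead de-duplication before a sequence is sent.
def dedupe_leads(leads: list[dict]) -> list[dict]:
    """Keep the first lead seen per normalized email address."""
    seen: set[str] = set()
    unique = []
    for lead in leads:
        key = lead["email"].strip().lower()  # normalize casing and whitespace
        if key not in seen:
            seen.add(key)
            unique.append(lead)
    return unique

leads = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Ada L.", "email": "ADA@example.com "},  # same inbox, different casing
    {"name": "Grace", "email": "grace@example.com"},
]
print(len(dedupe_leads(leads)))  # → 2
```

Keeping the first occurrence (rather than the last) means the earliest-sourced record wins, which is one reasonable policy; the important part is that the same inbox never receives the sequence twice.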
Master Natural Language - The future of programming is no longer C++ or Java; natural language is becoming the primary interface, effectively turning every human into a potential developer via AI.
Google executes a strategic three-layer AI stack move
“The thing that I think ties all of these together is that Google is running a three-layer strategy. The bottom layer is silicon: TPUs plus resold NVIDIA chips. They don't really care which chip wins; obviously they'd like their own to win, but they make money either way if you buy NVIDIA through them. The middle layer is being the compute host for frontier labs. Anthropic already runs on TPUs, and now Thinking Machines Lab is going to be running on Google Cloud. The top layer is the agent layer in the browser and the workspace. Google is the only player hitting all three credibly right now.”
NeoCognition develops agents that specialize like human workers
“NeoCognition is trying to build agents that self-specialize the same way, instead of the current model where you hand-craft a custom agent for every vertical. I've built enough custom agent workflows to know that this per-vertical approach doesn't really scale; you run out of engineers before you run out of use cases. So if NeoCognition can actually ship an agent that learns the rules of a new environment, and does it on its own, which I think is definitely going to be the key, that is a massive win.”
“Not only that, it generated a lead from a multi-billion-dollar company, actually two multi-billion-dollar companies, which is interesting, right? Because you're not just talking about generating views, you're talking about generating pipeline, and the agent is helping you do that. I'm not saying you should just let your agent YOLO and do whatever it wants all the time, but you should let it get you to the point where a human needs to review, and then you're okay to publish.”
“You have cron jobs firing at 2 a.m., which is what we have, that ingest all of my content from my podcast and my YouTube channels. We have cold outbound launching on Saturday, sending thousands of emails before Monday. You can do that. We have Deal Resurrection crons that find the right time to reach back out to people where we maybe lost the deal 60-plus days ago, within the last couple of years.”
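The Deal Resurrection filter described above can be sketched as a simple date-window query. The field names (`status`, `lost_on`) and the two-year lookback are assumptions for illustration; only the 60-plus-day threshold comes from the transcript.

```python
from datetime import date, timedelta

# Hypothetical sketch: select lost deals that went cold 60+ days ago,
# but not more than ~2 years ago, as candidates for re-outreach.
def resurrection_candidates(deals: list[dict], today: date) -> list[dict]:
    floor = today - timedelta(days=2 * 365)   # don't reach back past ~2 years
    cutoff = today - timedelta(days=60)       # deal must be at least 60 days cold
    return [
        d for d in deals
        if d["status"] == "lost" and floor <= d["lost_on"] <= cutoff
    ]

deals = [
    {"company": "Acme", "status": "lost", "lost_on": date(2025, 1, 10)},
    {"company": "Globex", "status": "lost", "lost_on": date(2025, 5, 1)},   # too recent
    {"company": "Initech", "status": "won", "lost_on": date(2024, 12, 1)},  # not lost
]
print([d["company"] for d in resurrection_candidates(deals, date(2025, 5, 15))])  # → ['Acme']
```

A cron job would then run this query nightly and hand the resulting list to the outreach agent, so the "right time" logic lives in the filter rather than in the send step.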