
ADOPT AUTO RESEARCH

All podcast episode summaries matching ADOPT AUTO RESEARCH — aggregated across every podcast we track.

2 episodes · Page 1/1

Quotes & Clips tagged ADOPT AUTO RESEARCH

Liquid AI architectures outperform transformers for low-latency search

Liquid neural networks: you can think of them as the next step, sort of state space models squared. It's a non-transformer architecture that's more complicated than state space models and, if I'm being honest, really difficult to code, but it's very efficient. It's sub-quadratic in the length of your context. It's a very compact way to represent things.

Mikhail Parakhin
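The sub-quadratic claim above can be made concrete with a toy linear state-space scan. This is an illustrative sketch, not Liquid AI's actual architecture: a fixed-size hidden state `h` summarizes the whole context, so each new token costs a constant amount of work regardless of history length, unlike full self-attention's quadratic cost.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Toy linear state-space scan over a 1-D input sequence.
    Total cost is O(n * d^2) in context length n: sub-quadratic in n,
    unlike full self-attention's O(n^2) pairwise comparisons."""
    d = A.shape[0]
    h = np.zeros(d)            # compact fixed-size state summarizing the context
    ys = []
    for x_t in x:              # one O(d^2) update per token, regardless of history
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(0)
d = 8
A = rng.normal(size=(d, d)) * 0.1   # state transition, scaled small for stability
B = rng.normal(size=d)
C = rng.normal(size=d)
out = ssm_scan(rng.normal(size=100), A, B, C)
print(out.shape)  # (100,)
```

The state `h` is the "very compact way to represent things": the model never revisits old tokens, it only carries the summary forward.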

Content-addressed caching creates network effects in data engineering

The main savings come from the fact that you ran it, you got your job done, and you moved on. Then somebody else, in some department you didn't know existed, runs the same task, but on a newer version. Right now, in most organizations, you can't even find out about it, so you can't even measure that you're spending that time twice. Here, if everybody's entangled, that's detected automatically, and it's detected that the output is the same.

Mikhail Parakhin
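The duplicate-work detection described above is the core idea of content-addressed caching. A minimal sketch, assuming jobs are keyed by a hash of their task name, code version, and inputs (the class and names here are illustrative, not any particular data platform's API):

```python
import hashlib
import json

class ContentAddressedCache:
    """Toy content-addressed cache: jobs are keyed by a hash of their task,
    version, and inputs, so identical work by unrelated teams is detected
    automatically instead of being recomputed."""

    def __init__(self):
        self.store = {}
        self.hits = 0

    def key(self, task, version, inputs):
        blob = json.dumps([task, version, inputs], sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def run(self, task, version, inputs, compute):
        k = self.key(task, version, inputs)
        if k in self.store:
            self.hits += 1             # somebody already ran this exact job
            return self.store[k]
        result = compute(inputs)
        self.store[k] = result
        return result

cache = ContentAddressedCache()
job = lambda xs: sum(xs)
cache.run("daily_rollup", "v2", [1, 2, 3], job)   # team A computes
cache.run("daily_rollup", "v2", [1, 2, 3], job)   # team B: cache hit, no recompute
print(cache.hits)  # 1
```

The network effect follows from the shared key space: the more teams publish through the same cache, the more duplicate runs collide on the same hash and get skipped.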

Unlimited token budgets prioritize high-quality model critique loops

It's not just about consuming tokens. You can consume tokens, and in fact the anti-pattern is running too many agents in parallel that don't communicate with each other. That's almost useless compared to just fewer agents, and it burns tokens very efficiently. What matters is setting up the right critique loop, especially with high-quality models, where one agent does something and the other one, ideally with a different model, critiques it and suggests ways to improve it.

Mikhail Parakhin
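The generator/critic pattern above can be sketched as a small loop. This is a minimal illustration with stub functions standing in for the two model calls; `writer` and `critic` are hypothetical toys, not a real agent framework:

```python
def critique_loop(draft_fn, critic_fn, task, rounds=3):
    """Generator/critic loop: one agent produces work, a second agent
    (ideally a different model) critiques it, and the draft is revised
    until the critic is satisfied or the round budget runs out."""
    draft = draft_fn(task, feedback=None)
    for _ in range(rounds):
        feedback = critic_fn(task, draft)
        if feedback is None:          # critic is satisfied; stop early
            break
        draft = draft_fn(task, feedback=feedback)
    return draft

# Toy stand-ins: the "writer" produces an unsorted list until the "critic"
# flags it; in practice both would be calls to different LLMs.
def writer(task, feedback):
    result = [3, 1, 2]                        # first draft: unsorted
    if feedback == "sort the output":
        result = sorted(result)               # revision incorporating critique
    return result

def critic(task, draft):
    return None if draft == sorted(draft) else "sort the output"

print(critique_loop(writer, critic, "return items in order"))  # [1, 2, 3]
```

The point of the structure is that tokens flow through a feedback channel: each round spends tokens on targeted critique rather than on uncoordinated parallel attempts.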

Simulated customer data provides a massive competitive moat

Shopify has decades of history of how people made changes and what that resulted in, in terms of sales. Now what we can do is use this. It's noisy data, and the websites are usually small. But if you aggregate everything together and apply denoising and a collaborative-filtering-like approach, you can extract a very clear signal. And then you can optimize your agents.

Mikhail Parakhin
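The denoising intuition above, simplified to its crudest form (plain pooling rather than actual collaborative filtering), is that the noise in per-site measurements shrinks roughly as 1/sqrt(n) when you average many sites observing the same change. The lift value and noise level below are invented for illustration:

```python
import random

def pooled_effect(observations):
    """Average many noisy per-site observations of the same change.
    The standard error of the mean shrinks as 1/sqrt(n), so a signal
    invisible in any single small site emerges from the aggregate."""
    return sum(observations) / len(observations)

random.seed(0)
true_lift = 0.02                      # assumed true sales lift of a change
noise_sd = 0.5                        # per-site noise dwarfs the signal
sites = [true_lift + random.gauss(0, noise_sd) for _ in range(100_000)]
estimate = pooled_effect(sites)
print(f"pooled estimate: {estimate:.3f}")
```

Real collaborative filtering goes further, borrowing strength across similar merchants rather than averaging blindly, but the aggregation principle is the same.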

Auto-research loops outperform human optimization through sheer volume

If I were doing 400 experiments myself, my batting average would have been much higher, I'm sure. But, first of all, it would take me something like three years to do 400 experiments. And I didn't have to do them; the machines, for just the price of electricity, did that. And I got one improvement that, honestly, when I was starting that experiment, my thinking was to go and show: hey, Andre, maybe you just don't know how to optimize.

Mikhail Parakhin
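The volume argument above can be sketched as a simple search loop: machine-proposed experiments have a poor hit rate individually, but hundreds of cheap attempts still surface improvements. The objective and proposal function here are hypothetical toys, not the actual research setup:

```python
import random

def auto_research(baseline, propose, evaluate, n_experiments=400):
    """Toy auto-research loop: run many cheap machine-generated experiments.
    Even with a low per-experiment hit rate, sheer volume finds improvements
    a human would not have time to try by hand."""
    best_cfg, best_score = baseline, evaluate(baseline)
    wins = 0
    for _ in range(n_experiments):
        cfg = propose(best_cfg)
        score = evaluate(cfg)
        if score > best_score:            # rare, but volume makes it happen
            best_cfg, best_score, wins = cfg, score, wins + 1
    return best_cfg, best_score, wins

random.seed(1)
evaluate = lambda cfg: -(cfg - 0.7) ** 2      # assumed metric, peak at 0.7
propose = lambda base: random.uniform(0, 1)   # machine proposes blindly
cfg, score, wins = auto_research(0.1, propose, evaluate)
print(f"improvements found: {wins}, final score: {score:.4f}")
```

A human would pick better candidates per trial, but the machine runs the whole budget overnight for the price of electricity.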

Reviewing code is the primary bottleneck in agentic workflows

The real problem is not spending time waiting for a PR. The real problem is that, since there's so much more code, the probability of at least some tests failing goes up. And then tests keep failing, and you have to find the offending PR, evict it, and retest without that PR. And so the deployment cycle becomes much longer.

Mikhail Parakhin
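The failure dynamic above has a simple probabilistic core: if each PR independently breaks some test with probability p, a batch of n PRs fails with probability 1 - (1 - p)^n, so bigger batches mean more broken cycles. The 2% per-PR failure rate below is an assumed number for illustration:

```python
def batch_failure_prob(p_fail_one, n_prs):
    """Probability that a deployment batch of n_prs PRs has at least one
    test failure, assuming each PR independently breaks something with
    probability p_fail_one: 1 - (1 - p)^n."""
    return 1 - (1 - p_fail_one) ** n_prs

for n in (1, 10, 50):
    print(n, round(batch_failure_prob(0.02, n), 3))
# 1 0.02
# 10 0.183
# 50 0.636
```

At 50 agent-written PRs per batch, most deployment cycles hit at least one failure and pay the find-evict-retest cost, which is why review and integration, not code generation, become the bottleneck.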

AI tool usage has reached near-universal internal adoption

This is the number of daily active workers. You know, think of DAU, basically: daily active users of an AI tool as a percentage of all the people in the company. And you can see that it approaches really 100% by now. It's hard to do your job now without interacting deeply with at least one tool.

Mikhail Parakhin
