NHacker Next
A conversation about AI for science with Jason Pruet (lanl.gov)
154 points by LAsteNERD 17 hours ago | 136 comments
fumeux_fume 10 hours ago [-]
> If you’ve played with the most recent AI tools, you know: They’re very good coders, very good legal analysts, very good first drafters of writing, very good image generators. They’re only going to get better.

Most of the bullshitters will tip their hand pretty early that they're just hype men for AI. Right off the bat, the fact that AI is disruptive and transforming society is apparently self-evident because they never cite a single premise or event to back this up. In the quote above, the phrase "if you've played" stuck out to me. Yes if you play around with them a little it's easy to believe they're really good at so many things. When you stringently evaluate them, you begin to see they make a lot of mistakes and perform inconsistently on even trivial tasks.

saretup 9 hours ago [-]
You sound more prejudiced against AI than they seem to be hyping AI in that sentence.

Yes, they’re very good coders; no, they aren’t perfect. Sometimes they make trivial mistakes or hallucinate, and other times they have unique insights about difficult problems or one-shot a lot of complex tasks.

If you don’t see the value of AI even at this point, you’re either lacking imagination and/or just too stuck in your own ways.

sampullman 9 hours ago [-]
I appreciate the value of AI in my own work, but "they’re very good coders" and "they make trivial mistakes or hallucinate" seems incongruous.

For me, AI tools shine when I know enough about a topic to quickly error check, but not enough where I can code fluently without documentation. I'm sure it will get more useful over time, but that's where it's been for me for the last year or so.

saretup 9 hours ago [-]
> "they’re very good coders" and "they make trivial mistakes or hallucinate" seems incongruous.

It seems that way if you judge them the same way you would human coders, but they’re different. They might be able to do things that veteran coders can’t without spending days on it, and fail at things that beginners can do in half an hour.

A car might not be able to traverse difficult terrain as well as a horse, but it doesn’t mean the car is not a good mode of transportation.

amelius 2 hours ago [-]
The problem I have with AI is that you cannot reason with them after they make mistakes. They always reply with "Good point!" and then they give me another iteration of bad fixes. You really have to know when you should give up and do things manually.
chairhairair 1 hours ago [-]
Gemini 2.5 is much more confident.

Anecdotally, I think this behavior is undesirable for most commercial LLM use cases. I have several friends that have complained about Gemini’s “back talking” and prefer ChatGPT’s relative sycophancy.

amelius 23 minutes ago [-]
I was talking about ChatGPT, but other LLMs also have this problem. I don't really care how polite they are, etc., I just want to get the job done, but the LLM often gets stuck at some point.
sampullman 8 hours ago [-]
I'm not sure the analogy fits. I would say AI is more like a fast car that turns the wrong way every couple of blocks, or a car that randomly refuses to go on certain roads. Is it still a useful mode of transportation? Sometimes, but it's not reliable.

The sibling comment says it better - it's about predictability of mistakes, and the effort and knowledge necessary to correct them.

Maybe the next generation of models will solve these problems, who knows.

fsflover 4 hours ago [-]
> AI is more like a fast car that turns the wrong way every couple of blocks

So if you drive well and can control this "car", you are able to fix that and benefit from the fast "car" a lot.

> or a car that randomly refuses to go on certain roads

This doesn't seem like a good comparison. AI will do anything, it may just be wrong.

sampullman 2 hours ago [-]
The AIs that I use will definitely not do anything, and the limits are not always obvious without a lot of prodding.

I'm fine with ditching the car analogy though.

a_bonobo 8 hours ago [-]
Yes, human coders are predictably 'bad' - it scales roughly with the complexity of the task and the years of their experience, along with some randomness ('brain farts'). LLMs are completely unpredictably 'bad' - sometimes a task that a high schooler can solve sends an LLM into a spin, sometimes an LLM solves a task that would require a PhD in the field. The unpredictability is what makes these tools so unfit for purpose.
JimDabell 3 hours ago [-]
> LLMs are completely unpredictably 'bad' - sometimes a task that a high schooler can solve sends an LLM into a spin, sometimes an LLM solves a task that would require a PhD in the field.

That doesn’t mean they are unpredictably bad, it means assuming that they behave like humans is a bad way of predicting their behaviour. “Unlike humans” and “unpredictable” are not the same thing. If you spend time working with them, you get a better sense of what they are good and bad at and get better at predicting their behaviour.

groestl 7 hours ago [-]
> The unpredictability is what makes these tools so unfit for purpose.

Also, their eagerness to produce. A good employee (senior or not!) knows when they're at the end of their wits and says so. In areas where LLMs are knowledgeable, but not knowledgeable enough, they'll still produce. They're stuck at unconscious incompetence, kind of like a systematic Dunning-Kruger effect. To solve that, we can either increase competence (moving out the area where this effect is noticeable), or maybe introduce some kind of feedback, self reflection (one could say, some kind of consciousness [0]).

[0] inspired by "Goedel, Escher, Bach"
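
A minimal sketch of the kind of feedback/self-reflection loop being suggested here; ask_llm is a hypothetical stand-in for whatever model API you would actually call, so this illustrates the loop rather than a working recipe:

    def ask_llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to an LLM API."""
        raise NotImplementedError

    def answer_with_reflection(task: str, max_rounds: int = 3) -> str:
        # First pass: produce something, as LLMs eagerly do.
        draft = ask_llm(f"Solve the following task:\n{task}")
        for _ in range(max_rounds):
            # Self-critique step: ask the model to find its own gaps.
            critique = ask_llm(
                "List concrete errors or gaps in this answer, or reply DONE "
                f"if you are confident it is correct.\n\nTask: {task}\n\nAnswer: {draft}"
            )
            if critique.strip() == "DONE":
                break
            # Revision step: feed the critique back in.
            draft = ask_llm(
                f"Revise the answer to fix these issues.\n\nTask: {task}\n\n"
                f"Answer: {draft}\n\nIssues: {critique}"
            )
        return draft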

const_cast 8 hours ago [-]
I don't think they're good coders at all, I think they're good at generating small snippets of code.

If you tried to replace a programmer with whatever LLM it would completely flop. Just write a script to fetch Jira tickets and let it rip - it won't work well. Because it doesn't know that maybe the Jira ticket is stupid, or that if they implement this change it requires insight from Team X, or that this bug is really intended behavior, etc.

Being a programmer is much more than, like, making a neat regular expression or generating slightly customized boilerplate. As it stands, these are assistants in which programmers guide and handhold them the whole way. They are not coders.

KolibriFly 6 hours ago [-]
The gap between demo impressive and reliably useful in real-world scenarios is still huge in a lot of cases. That said, I think the concern isn't that AI is flawless now, but that it's improving at a pace we haven't really seen with other technologies
bobxmax 1 hours ago [-]
there are hundreds of thousands of developers, lawyers, artists, musicians, etc who have transformed what they're doing with AI
__loam 5 hours ago [-]
It's really funny to see someone say this technology, which seems to be plateauing with marginal improvements after a few years, is improving at a pace we have never seen before, in an industry that is built on the microprocessor. Astoundingly ignorant.
mallardgryph 2 hours ago [-]
The marketing for it all has been something else. It's a technology that demos extremely well, is very impressive at a surface level, and, to those who weren't paying any attention to its evolution, appeared to come out of nowhere.

But this line of "it's only going to get even better" is a mantra that's endlessly expounded on, a brain worm even. It's never backed up with any observable evidence. It's marketing that they're tricking people into repeating.

keiferski 8 hours ago [-]
I don’t understand this viewpoint at all. Even if AI tools are nothing more than complex autocomplete systems - and not some kind of new consciousness - that alone is enough to dramatically shift entire industries. And it already is doing so…this isn’t theoretical anymore.
vanschelven 6 hours ago [-]
Not OP, but I think the viewpoint here is simply "if someone makes an evidently false claim ('great coders') right off the bat, it's enough to color the reading of the rest of the article." That's independent of what you're saying.
keiferski 6 hours ago [-]
Fair enough, but not sure that's much better of a viewpoint. It might be a worse one. The off-hand comment was "very good coders" (not "great") and dismissing the person's entire thought process because of a nitpick is not a good way to learn anything.
bobxmax 1 hours ago [-]
I'm sick of this forum's lame performative cynicism masquerading as depth—it's a lazy cop-out that spares you people from the uncomfortable, demanding work of actually building something better.

Technology that any of us 5 years ago would've thought was a hundred years away and all you read is moving goalposts from ornery developers in denial.

Silicon Valley is cooked.

hbartab 17 hours ago [-]
> We certainly need to partner with industry. Because they are so far ahead and are making such giant investments, that is the only possible path.

And therein lies the risk: research labs may become wholly dependent on companies whose agendas are fundamentally commercial. In exchange for access to compute and frontier models, labs may cede control over data, methods, and IP—letting private firms quietly extract value from publicly funded research. What begins as partnership can end in capture.

monkeyelite 16 hours ago [-]
> research labs may become wholly dependent on companies

They already are. Who provides their computers and operating systems? Who provides their HR software? Who provides their expensive lab equipment?

Companies are not in some separate realm. They are how our society produces goods and services, including the most essential ones.

klabb3 11 hours ago [-]
Sorry for meta but.. this is one of those rare cases where both the argument and the rebuttal are worth agreeing with.
hdivider 16 hours ago [-]
I fail to understand the sentiment here.

This is the intention of tech transfer. To have private-sector entities commercialize the R&D.

What is the alternative? National labs and universities can't commercialize in the same way, including due to legal restrictions at the state and sometimes federal level.

As long as the process and tech transfer agreements are fair and transparent -- and not concentrated in say OpenAI or with underhanded kickbacks to government -- commercialization will benefit productive applications of AI. All the software we're using right now to communicate sits on top of previous, successful, federally-funded tech transfer efforts which were then commercialized. This is how the system works, how we got to this level.

hbartab 1 hours ago [-]
If this were just about tech transfer, in which private firms commercialize public research, I would agree. But that's not what Jason Pruet is saying. In the Q&A he notes:

> “Why don’t we just let private industry build these giant engines for progress and science, and we’ll all reap the benefits?” The problem is that if we’re not careful, it could lead us to a very different country than the one we’ve been in.

This isn't about commercialization, it's about control. When access to frontier models and SOTA compute is gated by private interests, academics (and the public) risk getting locked out. Not because of merit, but because their work doesn't align with corporate priorities.

dekhn 13 hours ago [-]
What do you mean universities can't commercialize in the same way (I may have misunderstood what you meant)? Due to Bayh-Dole, universities can patent and license the tech they develop under contract for the government, often helping professors start up companies with funding while simultaneously charging those companies to license the tech. This is also true for national labs run by universities (Berkeley and a few others); the other labs are run under contract by external for-profit companies.
worldsayshi 16 hours ago [-]
> What is the alternative?

Reasonably there should be a two way exchange? It might be okay for companies to piggyback on research funds if that also means that more research insight enters public knowledge.

rapind 15 hours ago [-]
I’d be happy if they just paid their fair share of tax and stopped acting like they were self-made when they really just piggybacked on public funds and research.

There’s zero acknowledgment or appreciation of public infra and research.

dr-detroit 14 hours ago [-]
[dead]
delusional 16 hours ago [-]
> As long as the process and tech transfer agreements are fair and transparent

I think that's the crux of the point made by the person you're responding to. He does not believe it will be done fairly and transparently, because these AI corporations will have broad control over the technology.

hdivider 16 hours ago [-]
If so, yes indeed, fair point by him/her. It's up to ordinary folks like us to push against unfair tech transfer because yes, federal labs and research institutions would otherwise provide the incumbents an extreme advantage.

Having been in this world though, I didn't see a reluctance in federal labs to work with capable entrepreneurs with companies at any level of scale. From startup to OpenAI to defense primes, they're open to all. So part of the challenge here is simply engaging capable entrepreneurs to go license tech from federal labs, and go create competitors for the greedy VC-funded or defense prime incumbents.

cogman10 16 hours ago [-]
> I didn't see a reluctance in federal labs to work with capable entrepreneurs

My reluctance is that when we talk about fraud, waste, and corruption in government, this is where it happens.

The DoD's budget isn't $1T because they are spending $900B on the troops. It's $1T because $900B of that ends up in the hands of the likes of Lockheed Martin and Raytheon to build equipment we don't need.

I frankly do not trust "entrepreneurs" to not be greedy pigs willing to 100x the cost of anything and everything. There are nearly no checks in place to stop that from happening.

lovich 16 hours ago [-]
Not that it fully takes away from your argument, but a lot of that high price tag is also due to requiring much better controls on material to prevent supply chain attacks, à la getting beepers with explosives in the hands of all your leadership.
GolfPopper 7 hours ago [-]
Yet that's the exact opposite of what's been done with something like the F-35[1], with widely distributed production, typically among countries seen as US allies (at least prior to this year), but with key components still made in China.[2] And the problem is even worse in the larger defense industry.[3] Americans pay an immense premium for a military-industrial complex where the PR is largely divorced from reality; for example, the USS Gerald R. Ford, commissioned in 2017, still isn't combat ready.[4]

1. https://dsm.forecastinternational.com/2025/04/18/tariffs-tru...
2. https://www.xatakaon.com/materials/u-s-f-35-fighter-jets-and...
3. https://www.forbes.com/sites/davidjeans/2025/04/16/silicon-v...
4. https://www.19fortyfive.com/2025/03/the-navys-ford-class-120...

cogman10 14 hours ago [-]
All the more reason to bring such initiatives in-house and not outsource them.

You can hope that a defense company is doing the right things in terms of supply chain attacks, but that's a pretty lucrative corner to cut. They'd not even need to cut it all the time to reap benefits.

The only other alternative is frequent audits of the defense company which is expensive and wouldn't necessarily solve the problem.

BurningFrog 16 hours ago [-]
R&D results should be buried under a crystal obelisk at the bottom of the ocean, to warn future generations.
quantified 17 hours ago [-]
WILL end in capture. Profit demands it.
hbartab 16 hours ago [-]
Indeed.
hahajk 13 hours ago [-]
In the case of huge frontier LLMs, the public labs will likely never be able to compete. In my experience, govt orgs are ardent rule-followers and wouldn't be as willing to violate copyright.
godelski 14 hours ago [-]
There's a risk but there's also great reward if it is done properly. The only way to maximize utility of any individual player is to play cooperatively[0]. A single actor might get a momentary advantage by defecting from cooperation, but it decreases their total eventual rewards and frankly it quickly becomes a net negative in many cases.

That said, I'm not very confident such a situation would happen in reality. I'm not confident current industry leaders can see past a quarter and nearly certain they can't see past 4. Current behavior already indicates that they are unwilling to maximize their own profits. A rising tide lifts all ships, but many will forgo the benefit this gives them to set out to explore for new and greater riches and instead are only able to envy the other ships rising with them. It can be easy to lose sight of what you have if you are too busy looking at others.

[0] Simplified example illustrated by Iterative Prisoner's Dilemma: https://www.youtube.com/watch?v=Ur3Vf_ibHD0

[0.1] Can explain more if needed but I don't think this is hard to understand.
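
For anyone who skips the video in [0], here is a minimal iterated prisoner's dilemma sketch (standard Axelrod payoffs assumed: mutual cooperation 3/3, lone defection 5/0, mutual defection 1/1) showing how defecting wins a round but loses the long game:

    # Standard Axelrod payoffs assumed: (C,C)=3/3, (D,C)=5/0, (D,D)=1/1.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_hist, their_hist):
        # Cooperate first, then mirror the opponent's last move.
        return their_hist[-1] if their_hist else "C"

    def always_defect(my_hist, their_hist):
        return "D"

    def play(p1, p2, rounds=200):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h1, h2), p2(h2, h1)
            r1, r2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            s1 += r1; s2 += r2
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (600, 600): cooperation compounds
    print(play(always_defect, tit_for_tat))  # (204, 199): the defector "wins" the matchup, but both score far less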

KolibriFly 6 hours ago [-]
The optimist in me hopes we'll eventually reach some equilibrium where collaboration wins out
catigula 13 hours ago [-]
What's the "reward"?

I want to interrogate AI optimist type people because even if AI is completely safe and harmless I literally see only downsides.

Is your perception that living in theorized extreme comfort correlates to "reward"?

christophilus 13 hours ago [-]
You really see only downsides? I’m no AI optimist, but it is a useful tool, and it’s here to stay.
catigula 13 hours ago [-]
Yes, I only see downsides.

It's mostly because the actual stated and actualized goal of real AI is clearly bad.

It's like if you approached me and said "I'm going to erase meaning from your life. You will want for nothing. The entire paradigm of reality will be changed: your entire evolutionary struggle, made meaningless. You will want for nothing: "be" nothing. Also this might potentially kill you or some rich person or more likely nation-state could enslave you."

The actual stated goals seem negative to me. I'm not very interested in erasing and surpassing human achievement in every way. It seems inherently bad. I don't think that's an absurd perspective.

I think the disconnect here comes down to "what is the purpose of life", and I don't think any reasonable interpretation of that is "be obscenely comfortable".

KolibriFly 6 hours ago [-]
Once public research depends on private infrastructure, the balance of power shifts fast
shagie 15 hours ago [-]
The point of https://www.nrel.gov/index is to research how to do renewable energy. Likewise, the point of the research done at https://www.nrel.gov/hpc/about-hpc and its data center https://www.nrel.gov/computational-science/hpc-data-center is to pioneer ways to reuse waste heat (and better cool existing data centers).

I'm kind of disappointed that their dashboard has been moved or offline or something for the past few years. https://b2510207.smushcdn.com/2510207/wp-content/uploads/202... is what it used to look like.

abtinf 12 hours ago [-]
I would prefer that those with commercial interests pursue technological development rather than those with nationalistic and weapons-development interests.
mdhb 17 hours ago [-]
This is literally THE scam Elon, Thiel, Sacks and others are running as they gut the government.

Sell assets like government real estate to themselves at super cheap rates and then set up as many dependencies as they can where the government has to buy services from them because they have nowhere else to turn.

To give an example, this missile dome bullshit they are talking about building, which is a terrible idea for a bunch of reasons... but there are talks at the moment of having this run by a private company who will sell it as a subscription service. So in this scenario the US military can’t actually fire the missiles without the explicit permission of a private company.

This AI thing is the same scam.

FilosofumRex 13 hours ago [-]
Right on target, publicly funded research always ends up in the hands of private profiteers via private university labs.

If LLM/AI is critical to national security, then it should be funded solely via the Department of Defense budget, with no IP or copyright derivatives allowed.

mindslight 10 hours ago [-]
The political platform for rebuilding our country after the destructionists are deposed (which will happen sooner or later) needs to include scrutinizing every single one of these sales and/or long term contracts, and outright invalidating many of them as the fraudulent conveyances that they are. No more just accepting the nonsense of "oh that's private property now, nothing can be done". If the precedent discourages "investment" (aka looting) of government institutions in the future, that's a good thing.
inquirerGeneral 15 hours ago [-]
[dead]
tantalor 17 hours ago [-]
I was a bit puzzled what "1663" is. Here's what I found:

> The Lab's science and technology digital magazine presents the most significant research initiatives and accomplishments from national-security-related programs as well as projects that advance the frontiers of basic science. Our name is an homage to the Lab's historic role in the nation's service: During World War II, all that the outside world knew of the top-secret laboratory was the mailing address - P.O. Box 1663, Santa Fe, New Mexico.

https://researchlibrary.lanl.gov/about-the-library/publicati...

Zorass 4 hours ago [-]
Whether AI is a “good programmer” really depends on what you mean by programming. If it means being fluent in syntax, quickly generating prototypes, and recalling large amounts of code patterns, then yes, it's surprisingly strong. But if being a programmer includes debugging intuition, tracking context over multiple sessions, and knowing when not to write code, it's still not there yet.
senderista 16 hours ago [-]
Clearly AI is worthy of public investment, but given the capture of this administration by tech interests, how can we be sure that public AI funding isn't just handouts to the president's cronies?
candiddevmike 15 hours ago [-]
How about we fix global warming and switch 100% to clean energy, and then invest in AI?
ben_w 15 hours ago [-]
To the extent that further improvements to AI remain economically useful, "let's do these other things first" means your economy trails behind those of whoever did work on the AI.

To the extent that further improvements to AI are either snake oil or just hard to monopolise on, doing everything else first is of course the best idea.

Even though I'm more on the side of finding these things impressive, it's not at all clear to me that the people funding their development will be able to monopolise the return on the investment - https://en.wikipedia.org/wiki/Egg_of_Columbus etc.

Also: the way the biggest enthusiasts are talking about the sectoral growth and corresponding electrical power requirements… well, I agree with the maths for the power if I assume the growth, but they're economically unrealistic on the timescales they talk about, and that's despite that renewables are the fastest %-per-year-growth power sector and could plausibly double global electrical production by the early 2030s.

haswell 15 hours ago [-]
> To the extent that further improvements to AI remain economically useful, "let's do these other things first" means your economy trails behind those of whoever did work on the AI.

The major question is: at what point will unaddressed climate change nullify these economic gains and make the fact that anyone worried about them feel silly in retrospect?

Or put another way, will we even have the chance to collectively enjoy the benefits of that work?

TeMPOraL 14 hours ago [-]
Still, another two big questions are:

- To what extent AI will actually be helpful in solving the climate crisis?

- To what extent the power generation growth fueled by AI will be critical to solving the climate crisis, and conversely, how badly we'll be screwed without it?

"Degrowth" is not an option. It hasn't been for a long time now. We can't just "social solution" our way out this problem anymore.

Retric 14 hours ago [-]
We can very quickly and reasonably cheaply ‘solve’ global warming without AI. Wind, solar, and batteries + existing hydro and nuclear can hit a 95% clean electricity grid, with capacity to scale up EVs and heat pumps.

Electricity + home heating + cars is not 100%, but cutting emissions in half means you double the time before reaching any given threshold. For many problems the last 10% is the most challenging but we get 10x as long to solve it and 10x as long to implement mitigation strategies.

That’s what makes climate change critical, the next year is more important than what happens 10 or 20 years from now.

daedrdev 13 hours ago [-]
Electricity is only part of energy use and emissions production. In no way is there a simple and cheap solution; all proposals have tradeoffs and costs.
Retric 12 hours ago [-]
Electricity can replace the need for gasoline, diesel, natural gas, fuel oil, coal, etc. across a wide range of applications while saving money. Thus the talk of heat pumps and EVs in that post. Which then further reduces the need to extract, transport, and refine oil, natural gas, and coal, further reducing methane from leaking pipes etc.

However, doing so requires ramping up electricity production and storage.

drdrey 13 hours ago [-]
electricity generation is only ~20% of emissions
Retric 12 hours ago [-]
Replacing gasoline and diesel use in cars and trucks removes 10% directly at the tailpipe and another 3-5% indirectly. But EVs need electricity, so clean electricity can offset more than its current 20% share. The actual calculation depends on unknown factors like how we ramp down oil production, ethanol use, and how refineries adjust to changing demands.

Add heating for buildings + hot water + industrial processes that can use electricity instead of fossil fuels alongside indirect effects like methane leaks from pipelines and drumroll.

We can get to ~50% reduction while saving money over using fossil fuels.
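
A back-of-the-envelope version of that arithmetic; the individual shares below are rough illustrative assumptions rather than sourced figures, but they show how the pieces can add up to roughly half of total emissions:

    # Rough illustrative shares of total emissions (assumptions, not sourced figures).
    reductions = {
        "existing electricity generation, decarbonized": 0.20,
        "road transport tailpipe (EVs)":                 0.10,
        "upstream fuel extraction/refining, indirect":   0.04,  # midpoint of the 3-5% above
        "building heat + hot water (heat pumps)":        0.08,  # assumed
        "electrifiable industrial heat":                 0.05,  # assumed
        "avoided methane leaks from pipelines":          0.03,  # assumed
    }
    total = sum(reductions.values())
    print(f"~{total:.0%} of emissions addressable by clean electrification")  # ~50%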

haswell 13 hours ago [-]
I think those are the right questions to be asking, with emphasis on the fact that these remain questions.

Too many AI accelerationists are treating these questions as foregone conclusions, which seems like an enormously dangerous bet without a clearer pathway to a truly beneficial outcome.

It may very well be that some form of AI (which form? hard to say - probably not LLMs) are a part of the solution. It may just as well be that they are not. But when building software, the age old advice to “start with the problem, not the solution” comes to mind.

The number of engineers I’ve worked with over the years (I’ve been one of them) who are convinced that the feature they’re building is the right thing to build, only to realize later that it doesn’t solve a problem people care about…is a very large number.

Regarding degrowth, I’m not advocating for it. With that said, that will be the unwanted result forced on us by the realities of the environment if we can’t put a lid on the climate issue.

ngangaga 12 hours ago [-]
> "Degrowth" is not an option.

This is not helpful. There are many reasons degrowth won't generally help humanity, but the benefits are particularly applicable to western nations and their diplomatic relations. Certainly many western nations can bear degrowth without significant loss in quality of life. The wealthy just gotta take a significant cut to their waistlines.

> We can't just "social solution" our way out this problem anymore.

This certainly seems to be the liberal solution. Short of evicting them from our society what better choices do we have?

temp0826 12 hours ago [-]
The fastest way to solve the climate crisis is to DESTROY ALL HUMANS
grey-area 14 hours ago [-]
Is generative AI economically useful? More economically useful than switching to renewable energy?
rudolph9 14 hours ago [-]
There are notable similarities between a wartime economy and one continually adapting to global warming. While perhaps not sustainable long-term, we may observe short-term economic growth driven by government spending, followed by extended inflationary periods. We might currently be experiencing the first cycle of this kind, with more likely to follow.
ngangaga 15 hours ago [-]
Well yes, nationalism will be the dagger in the heart of humanity. But AI won't do anything to address this; in fact, leaning into the concept of competing rather than cooperating economies will accelerate pushing the dagger in.
CooCooCaCha 13 hours ago [-]
That’s why I wonder if a planetary government is inevitable sometime in the future. We can’t address species-wide issues if we’re constantly worried about competition, and if market forces aren’t going to work then the only other solution I can think of is a bigger, more powerful entity laying down the law.
dale_glass 15 hours ago [-]
Who "we"?

The people qualified to fix global warming aren't the same people qualified to work on ML.

XorNot 14 hours ago [-]
Don't you know? Humanity can only solve one problem at a time in order of importance.

And its corollary: something being in the news or social media means everyone else has stopped working on other problems and is now solely working on whatever that headline's words say.

threeseed 13 hours ago [-]
Yes they are.

I've worked with hundreds of Data Scientists and every one had the ability to work on different problem areas. And so they were working on models to optimise truck rollouts or when to increase compute for complex jobs.

If as a society we placed an increased emphasis on efficiency and power consumption we would see a lot more models being used in those areas.

michaelt 13 hours ago [-]
Seems to me, we know pretty well what we need to do about climate change. It has already been data scienced many, many times.

The problem is we don't want to do it.

85392_school 15 hours ago [-]
You'd probably meet the talking point that if we don't accelerate AI development China will win.
bcoates 15 hours ago [-]
1. Build atomic power plants sufficient to supply electricity needs for projected future AI megaprojects

2. Inevitable AI winter

3. Keep running the plants, clean energy achieved, stop burning coal, global warming solved

GolfPopper 7 hours ago [-]
The problem is that's far from a guaranteed path. What if, instead we get:

1. Plan to build atomic power plants sufficient to supply electricity needs for projected future AI megaprojects.

2. Build the AI megaprojects first, because of fallacies related to sunk costs, urgency, and unwillingness to let the pyramid scheme collapse.

3. Never complete the nuclear plants, never get the pot of gold at the end of the AI rainbow, and pump a couple hundred million tons of CO2 into the atmosphere in the mean time.

whatever1 14 hours ago [-]
This is the plan. Build all the clean infrastructure with the fake promise of AI and once the bubble bursts, boom. We have spare clean capacity for everyone.
_heimdall 14 hours ago [-]
We can't just switch to clean energy, we would need to drastically reduce our energy use per capita.
dlivingston 13 hours ago [-]
Absolutely not. We would be moving backwards as a society. Increased energy usage is a bellwether of societal advancement. See the Kardashev scale and Dyson sphere for example.

[0]: https://en.wikipedia.org/wiki/Kardashev_scale

[1]: https://en.wikipedia.org/wiki/Dyson_sphere

_heimdall 11 hours ago [-]
How do you propose we supplant all of our current energy use with green energy? What are the sources that work at scale, how do we store energy at scale assuming the green sources aren't 24/7 reliable, and how do we scale said systems without a net impact on the environment?

Our current use of electricity per capita, at least in the west, is ridiculously unsustainable if we aren't willing to keep abusing fossil fuels as an efficient store and source of energy.

I'm all for reducing our impact, but we have to stop kidding ourselves with the pipe dream that we just need the right source of energy to be able to grow our energy use per capita indefinitely.

BrawnyBadger53 13 hours ago [-]
I don't think we should be setting artificial limiters on energy usage or anything but you're using circular reasoning as justification.

These scales use energy usage as the measure of progress which you are saying is proof that reducing energy use is reducing progress.

threeseed 13 hours ago [-]
Which is actually a problem AI is perfect for.
_heimdall 11 hours ago [-]
How so? LLMs will just guess at what sounds like a likely human response to the problem.
threeseed 10 hours ago [-]
Far more to AI than just LLMs. I am talking about more traditional models.
_heimdall 9 hours ago [-]
Which ones? Machine learning is a larger field, but artificial intelligence is different entirely.
engineer_22 14 hours ago [-]
Let's also cure cancer and stop all wars while we're at it.
GolfPopper 7 hours ago [-]
Your friendly megachurch pastor, I mean TV psychic, I mean AI that will be coming real soon will do that for you! Just send money now.
madaxe_again 14 hours ago [-]
Don’t forget world hunger.

I don’t understand this line of reasoning - it’s like saying “you’re not allowed steam engines until you drain all of your mines”. It’s moralistic, rather than pragmatic.

somenameforme 11 hours ago [-]
It's just a passive aggressive means of disagreeing with the value of something without providing any argument for such. The exact same rhetorical fallacies are widespread when it comes to debates around the values of e.g. pursuing efforts to become a multiplanetary species.

I think it's safe to call it a rhetorical fallacy because its underlying premise implies humanity can somehow only focus on a very finite number of things at once and that funds directed to 'your thing' would somehow otherwise be directed to 'their thing' which is even more absurd.

threeseed 13 hours ago [-]
There is no one cancer but we are working to cure as many variations as we can.
godelski 15 hours ago [-]
This isn't an "either or" situation. You can do both.

The absolute dollar value might seem high, because we're working with the budget of not just a country but the wealthiest country, but as a percentage it is quite low. You can certainly pull funds from other areas too, like the military, which also greatly benefit from such research endeavors.

Even if these were exclusively non-weapons and non-military based technologies being developed it'd be naive to not recognize that the best military defense is to stop a war before it happens. That comes through many avenues, but notably the development of new technologies, especially those that benefit people as a whole (e.g. medicine or see the Taiwan strategy). But even then, it would also be naive to think that the same technology couldn't be adapted to military uses. Anything can be a weapon if you use it wrong enough.

But note that we're also seeing a reduction in federal research funding. We're also seeing less fundamental research and these types of problems need a strong pipeline through the classic TRL scale[0]. I think you'll find some of that discussion over in yesterday's thread about Bell Labs. The pipeline doesn't work if you don't take risks and are rushing. You need a fire under your ass but too hot and you get burned. It's easy to be myopic in today's settings, and that's not a great idea for an organization who needs to have an outlook in terms of decades and centuries (i.e. government) as opposed to the next election cycle or next quarterly earnings report.

Mind you, we've done these things before: both the Space Race and the Manhattan Project. At the height of the Space Race, NASA's budget was over 4.41% of the federal budget[2]. I'm not sure what percent the Manhattan Project's budget was, but it is very clear that this is A LOT cheaper than what actual war costs[3]. We're talking about less than a month of war costs. Remember, we spent over $750bn in Iraq[4]. The question is not if we have the money, but what we want to spend it on. Personally I'd rather fund stuff like this than bombing people. Frankly, you can eat the cake too, as it makes it cheaper to bomb people as well...

[0] https://en.wikipedia.org/wiki/Technology_readiness_level

[1] https://news.ycombinator.com/item?id=43957010

[2] https://en.wikipedia.org/wiki/Budget_of_NASA

[3] https://en.wikipedia.org/wiki/Manhattan_Project#Cost

[4] https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War

neves 15 hours ago [-]
What's the Taiwan strategy?
godelski 15 hours ago [-]
https://www.tsmc.com/
bobxmax 13 hours ago [-]
It's going to be fun when China rules the world because western democracies have gone so batshit anti-tech
kgwxd 14 hours ago [-]
AI might be a faster path to solving those issues.
_heimdall 14 hours ago [-]
AI doesn't yet exist as far as we know, and LLMs will only be helpful at generating a massive amount of guesses at how a human may describe the solution.

At best it's a needle in a haystack approach, and one that seems to toss out methodical, reasoned science in favor of a blunderbuss.

gosub100 14 hours ago [-]
what's wrong with global warming? If you're asking me to put animals' lives before my own, I say "no thanks".
dr-detroit 14 hours ago [-]
[dead]
babyent 15 hours ago [-]
I think climate change is legit. But I also think that a lot of it is just a mechanism to kneecap other countries from progressing. I don’t mind, because it keeps us at the top.
gosub100 14 hours ago [-]
[flagged]
conradev 16 hours ago [-]
The DOE has been building supercomputers for a while now: https://en.m.wikipedia.org/wiki/Oak_Ridge_Leadership_Computi...
godelski 14 hours ago [-]
Even more importantly, they are GPU based. The US has 3 exascale computers (out of 3 in the world). I should stress that these measurements are based on LINPACK, and are at fp64 precision. This is quite a different measurement than others might be thinking of with recent announcements in AI (which are fp8)

https://www.top500.org/lists/top500/2024/11/

giardini 14 hours ago [-]
LLMs seem to be plateauing. I'd rather let the markets chase AI.
swalsh 14 hours ago [-]
How do you make that assessment? I'll admit, the knowledge base is not 10x every few months anymore, but the agent capabilities keep getting better. The newer models can do a lot of useful work accurately for a while. That wasn't true several months ago.
overgard 13 hours ago [-]
Wake me up when they solve hallucination.
therealpygon 14 hours ago [-]
LLMs, maybe. AI? Hardly.
apwell23 14 hours ago [-]
>AI? Hardly.

what are some examples of 'hardly' ?

falcor84 13 hours ago [-]
There are a ton of big examples of advances across all subfields of AI over the last year, but the first one that came to my mind was AlphaGeometry2 [0], which combines Gemini with a symbolic agent to solve (offer proofs for) the vast majority of IMO geometry problems. I believe that this sort of approach can be applied to many other types of problems.

[0] https://arxiv.org/abs/2502.03544

voidspark 13 hours ago [-]
"LLM" is not mentioned anywhere in the article.
nyarlathotep_ 13 hours ago [-]
There's a serious issue around naming here, I'll agree.

I assume "AI" in contemporary articles, especially as it pertains to investments, means "Generative AI, especially or exclusively LLMs."

voidspark 13 hours ago [-]
In this case it means research, as it pertains to universities.
voidspark 13 hours ago [-]
The article explains that the lab would support universities by providing infrastructure.
b59831 15 hours ago [-]
[dead]
woah 14 hours ago [-]
HN commenters in 1960:

> Clearly computer networking is worthy of public investment, but given the capture of this administration by military industrial interests, how can we be sure that public networking funding isn't just handouts to the president's cronies?

myhf 13 hours ago [-]
There was literally a vaporware "AI" hype cycle in 1960. Propositional logic programming was poison to investors for 50 years because of that one, just like LLMs will be poison to investors for 50 years because of this one.
dekhn 13 hours ago [-]
Check out the history of BBN, who was deeply involved in the creation of the modern internet. There was an open revolving door between BBN employees and granting agencies, and BBN was even charged with contract fraud by the government. It's owned by Raytheon, a classic defense company.

Our country's tight relationship between the government, military, academia, and industry has paid off repeatedly, even if it has some graft.

newfocogi 16 hours ago [-]
Another recent AI article out of LANL: https://www.lanl.gov/media/publications/1663/1269-earl-lawre...

And discussed on HN: https://news.ycombinator.com/item?id=43765207

This does feel like a step change in the rate at which modern AI technologies and programs are being pushed out in their PR.

gitroom 2 hours ago [-]
Man, reading all this makes me glad I just write my own scripts when I get stuck. AI's cool til it isn't.
zkmon 17 hours ago [-]
I like how he says that AI is a general-purpose technology like electricity.
KolibriFly 6 hours ago [-]
Whether it's hype or not, treating AI as a general-purpose tech isn't that wild when it's already touching code, writing, design, logistics, education (you name it)
kristel100 4 hours ago [-]
What stood out to me here was the point about AI as a “force multiplier” rather than a solver. It’s not about replacing scientists but scaling their attention. More people in science need to hear that.
andy99 17 hours ago [-]
The real title is "Q&A with Jason Pruet"
Quenby 11 hours ago [-]
We often hear about AI being used in healthcare or finance, but seeing it applied to something like nuclear systems—where there's zero room for failure—really puts the weight of this technology into perspective.

What stood out to me was how cautious they are. It’s not about letting AI make decisions, but about spotting potential issues earlier so humans can step in sooner. It’s not AI making the call, it’s more like helping the call happen earlier. And I really respect that approach.

paradox460 12 hours ago [-]
Fwiw, LANL saw some of its heaviest layoffs this year, even heavier than those that happened under Nanos in the post-Cerro Grande investigation. From what I gather, the feeling up on the hilltop is one of anxiety.
LAsteNERD 17 hours ago [-]
PR in here for sure, but some smart context on the scientific and national security potential the DOE and National Labs see in AI.
quakeguy 15 hours ago [-]
They should invest in natural intelligence first.
lp251 17 hours ago [-]
wonder if they still train all of their models using Mathematica because it was impossible to get PyTorch on the classified systems
pphysch 17 hours ago [-]
AFAIK that was mostly due to a silly detail about MD5 hashing being restricted on FIPS compliant systems? Or something like that. I'm pretty sure there's an easy workaround(s).
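
If the detail in question is the usual Python-on-FIPS one, where a plain hashlib.md5 call fails because MD5 is disallowed, the common workaround since Python 3.9 looks roughly like this; whether that was the actual blocker on those classified systems is just a guess:

    import hashlib

    def cache_key(data: bytes) -> str:
        # On FIPS-enabled systems a plain hashlib.md5() call can fail outright.
        # Python 3.9+ lets you flag the digest as non-security use (e.g. cache keys).
        try:
            digest = hashlib.md5(data, usedforsecurity=False)
        except TypeError:  # older interpreters without the keyword
            digest = hashlib.md5(data)
        return digest.hexdigest()

    print(cache_key(b"some tensor bytes"))
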
lp251 17 hours ago [-]
there were a bunch of reasons. couldn’t bring compiled binaries onto the red, so you had to bring the source + all deps onto a machine with no external internet.

it was unpleasant.

candiddevmike 16 hours ago [-]
Just have Hegseth run PyTorch for them
levocardia 13 hours ago [-]
>pip install *
KolibriFly 6 hours ago [-]
Feels like we're either on the verge of a renaissance or a breakdown, and no one really knows which way it'll tilt
stonogo 16 hours ago [-]
The actual reason is "because they're being told to." Before that, there was a massive public-cloud push DOE-wide. Nobody outside of ASCR is interested in computing, and there's a lot of money to be made if you can snag an eternal rent check for hosting federal infrastructure.
bawana 14 hours ago [-]
When will we get AI CEOs? I am tired of seeing corporate decisions driven by quarterly bonuses.
falcor84 13 hours ago [-]
How is this related to TFA which focuses on national (public) labs?

In any case, if you want to take a stab at it, feel free to go ahead and start a company whereby you as the owner would assign the CEO responsibilities to an AI.

th0ma5 10 hours ago [-]
No they probably meant existing companies... when are AI companies going to get rid of their CEOs?
falcor84 3 hours ago [-]
Well, if you were to ask me, I would say that the biggest bottle-neck is the productive time horizon for AIs, which is currently at about 1h before SoTA agents go off-rails, based on the METR report[0]. As the report models, this time horizon doubles approximately every 7 months (R² = 0.98 on a log-scale), and thus if we assume that a CEO's work requires a time horizon of e.g. 10,000 hours to effectively plan and implement a long-term strategy, then we could expect companies to be able to replace CEOs by around 2033. There are of course other factors at play, but knowing companies' propensity to move fast and break things, I would indeed put some money that by the end of 2033, we'll see at least one mid-sized tech company delegate most (if not all) of its CEO decision-making to an AI agent.

[0] https://arxiv.org/pdf/2503.14499
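
Spelling out that extrapolation (the 1-hour starting horizon and the 10,000-hour CEO horizon are the assumptions from the comment above; the 7-month doubling time is the METR figure):

    import math

    current_horizon_h = 1        # assumed current agent time horizon, in hours
    target_horizon_h = 10_000    # assumed horizon needed for CEO-level planning
    doubling_months = 7          # doubling time reported by METR [0]

    doublings = math.log2(target_horizon_h / current_horizon_h)  # ~13.3
    years = doublings * doubling_months / 12                     # ~7.8 years
    # Counting from 2025 as the baseline year.
    print(f"{doublings:.1f} doublings, ~{years:.1f} years, i.e. roughly {2025 + years:.0f}")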

zzzeek 14 hours ago [-]
> Over the last two years, we’ve more or less run out of benchmarks where AI isn’t better than humans.

this whole "benchmarks" thing is laughable. I've been using Gemini all week to do code assist, review patches, etc. Such impressive text, lists of bullets, suggestions, etc., but then at the same time it makes tons of mistakes, which you then call it on, and it predictably is like "oh sorry! of course!" yes of COURSE. because all it does is guess what word is most likely to come after the previous word. is there a "benchmark" for "doesn't hallucinate made up BS?" because humans can do very well on such a benchmark.

Zoethink 5 hours ago [-]
[dead]
ArthurStacks 14 hours ago [-]
[flagged]