Pet Peeves

It’s getting hard to wake up every day, read the latest news of the slaughter of civilians in Gaza and the plans to finish off or exile the rest, then go through the two ID checks at the campus gate designed to make sure that no protests about this happen on campus, and when I get to my office resist the temptation to write a rant. But no one wants to read this, and it would probably violate the new rules we’re now living under here. So, I’ll complain instead about some pet peeves about theoretical particle physics.

This week there is the newest edition of a Pre-SUSY School in Santa Cruz, designed to train graduate students and postdocs. My first pet peeve is the whole concept of the thing. It starts off with an Introduction to Supersymmetry which introduces the MSSM, but why is anyone training graduate students and postdocs to work on supersymmetric extensions of the Standard Model? These were a failed idea pre-LHC (see my book…), and the LHC results conclusively confirm that failure.

The Introduction to Supersymmetry lectures given by Ben Allanach are an updated version of similar lectures given at other summer schools designed to train people in SUSY. These lectures trigger several of my pet peeves even before they get to SUSY. I’ve written about some of this before in detail, see here.

The first of these is the insistence on using the same notation for a Lie group and its Lie algebra. In both versions of the lecture notes, we’re told that
$$SO(1,3)\cong SL(2,\mathbf C)$$
and
$$SO(1,3) \cong SU(2) \times SU(2)$$
There are several problems with this. In the first case the statement is about the group $SO(1,3)$; in the second it’s about the Lie algebra of $SO(1,3)$, but the same symbol is being used for both. One would guess that $\cong$ means the two things are isomorphic, but that’s not true in either case.

More completely, in the older version of the notes, we’re told

there is a homeomorphism (not an isomorphism)
$$SO(1,3)\cong SL(2,\mathbf C)$$

“Homeomorphism” is nonsense, which has been fixed in the newer version to

there is a homomorphism (not an isomorphism)
$$SO(1,3)\cong SL(2,\mathbf C)$$

There’s still the problem of why a homomorphism that isn’t an isomorphism is getting written as $\cong$. The text does later explain what is really going on (there’s a 2-1 Lie group homomorphism from $SL(2,\mathbf C)$ to $SO(1,3)$).
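
For anyone who wants to see the 2-1 homomorphism concretely, here is a minimal numerical sketch (Python/numpy, my own conventions, nothing from the notes): $A\in SL(2,\mathbf C)$ acts on Hermitian matrices $X=t\mathbf 1+x\sigma_1+y\sigma_2+z\sigma_3$ by $X\mapsto AXA^\dagger$, which preserves $\det X=t^2-x^2-y^2-z^2$ and so gives a Lorentz transformation, with $A$ and $-A$ giving the same one.

    import numpy as np

    sigma = [np.eye(2, dtype=complex),
             np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]

    def lorentz(A):
        """The 4x4 matrix by which A in SL(2,C) acts on (t, x, y, z)."""
        L = np.zeros((4, 4))
        for nu in range(4):
            X = A @ sigma[nu] @ A.conj().T      # X -> A X A^dagger
            for mu in range(4):
                # coefficient of sigma_mu in X is (1/2) tr(sigma_mu X)
                L[mu, nu] = 0.5 * np.real(np.trace(sigma[mu] @ X))
        return L

    rng = np.random.default_rng(0)
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    A = M / np.sqrt(np.linalg.det(M))           # det A = 1, so A is in SL(2,C)

    eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric
    L = lorentz(A)
    print(np.allclose(L.T @ eta @ L, eta))      # True: L preserves the Minkowski metric
    print(np.allclose(lorentz(-A), L))          # True: A and -A give the same L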

The other equation is more completely given as

locally (i.e. in terms of the algebra), we have a correspondence
$$SO(1,3) \cong SU(2) \times SU(2)$$

The “locally (i.e. in terms of the algebra)” does help with the fact that the symbol $SO(1,3)$ means something different here, that it’s the Lie algebra of $SO(1,3)$ not the Lie group $SO(1,3)$ of the other equation. The word “correspondence” gives a hint that $\cong$ doesn’t mean “isomorphism”, but doesn’t tell you what it does mean.

A minor pet peeve here is calling the Lie algebra of a Lie group its “algebra”, dropping the “Lie”. For any group, its “group algebra” is something completely different (the algebra of functions on the group with the convolution product). Mostly when mathematicians talk about “algebras” they mean associative algebras, and a Lie algebra is not associative. Why drop the “Lie”?

What’s really true (as explained here) is that the Lie algebra of $SO(1,3)$ and the Lie algebra of $SU(2)\times SU(2)$ are different real Lie algebras with the same complexification (the Lie algebra of $SL(2,\mathbf C)\times SL(2,\mathbf C)$). In the earlier version of the notes there’s nothing about this. There’s the usual definition of two complex linear combinations
$$A_i=\frac{1}{2}(J_i +iK_i),\ \ B_i=\frac{1}{2}(J_i -iK_i)$$
of basis elements $J_i$ and $K_i$ of the Lie algebra of $SO(1,3)$, giving two separate copies of the Lie algebra of $SU(2)$. All we’re told there is that “these linear combinations are neither hermitian not anti-hermitian”.
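
For anyone who wants to check this, here is a quick sketch (again Python/numpy), using one standard physics convention for the $4\times 4$ vector representation matrices (my choice, not necessarily the notes’), verifying that the $A_i$ and $B_i$ satisfy two mutually commuting copies of the commutation relations of the Lie algebra of $SU(2)$:

    import numpy as np
    from itertools import product

    # Levi-Civita symbol
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[j, i, k] = 1.0, -1.0

    # generators acting on (t, x, y, z): rotations (J_i)_{jk} = -i eps_{ijk}
    # on the spatial block, boosts K_i = i (E_{0i} + E_{i0})
    J = [np.zeros((4, 4), dtype=complex) for _ in range(3)]
    K = [np.zeros((4, 4), dtype=complex) for _ in range(3)]
    for i in range(3):
        for j, k in product(range(3), repeat=2):
            J[i][j + 1, k + 1] = -1j * eps[i, j, k]
        K[i][0, i + 1] = K[i][i + 1, 0] = 1j

    A = [(J[i] + 1j * K[i]) / 2 for i in range(3)]
    B = [(J[i] - 1j * K[i]) / 2 for i in range(3)]

    def comm(X, Y):
        return X @ Y - Y @ X

    checks = []
    for i, j in product(range(3), repeat=2):
        su2_A = sum(1j * eps[i, j, k] * A[k] for k in range(3))
        su2_B = sum(1j * eps[i, j, k] * B[k] for k in range(3))
        checks += [np.allclose(comm(A[i], A[j]), su2_A),   # [A_i, A_j] = i eps_ijk A_k
                   np.allclose(comm(B[i], B[j]), su2_B),   # [B_i, B_j] = i eps_ijk B_k
                   np.allclose(comm(A[i], B[j]), 0)]       # the two copies commute
    print(all(checks))   # True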

In the newer version, this has been changed to describe these linear combinations as “hermitian linear combinations”. We’re told

The matrices representing both $J_i$ and $K_i$ have elements that are pure imaginary. (2.2) then implies that
$$(A_i)^* = -B_i$$
which is what discriminates $SO(4)$ from $SO(1, 3)$.

which I don’t really understand. Part of the source of the confusion here is the conflation of Lie algebra elements (which don’t have a notion of Hermitian adjoint) with Lie algebra representation matrices for a unitary representation on a complex vector space (which do). Here there are different defining representations involved (the spinor representation for $SU(2)$ and the vector representation for $SO(1,3)$).
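
For what it’s worth, with the explicit $4\times 4$ vector representation matrices of the sketch above (where the $J_i$ and $K_i$ do have purely imaginary entries), the quoted relation can at least be checked numerically as a statement about entrywise complex conjugation of those particular matrices, which only makes sense once a representation has been fixed:

    # continuing the sketch above: entrywise complex conjugation of the
    # 4x4 vector representation matrices sends A_i to -B_i
    print(all(np.allclose(A[i].conj(), -B[i]) for i in range(3)))   # True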

There’s then a confusing version of the correct statement that “the Lie algebra of $SO(1,3)$ and the Lie algebra of $SU(2)\times SU(2)$ are different real Lie algebras with the same complexification”

the Lie algebra of $SO(1, 3)$ only contains two mutually commuting copies of the real Lie algebra of $SU(2)$ after a suitable complexification because only certain complex linear combinations of the Lie algebra of $SU(2) \times SU(2)$ are isomorphic to the Lie algebra of SO(1, 3).

Here’s an idea for a summer school for physics theory grad students and postdocs: teach them properly about $SO(1,3)$, $SO(4)$, their spin double covers, Lie algebras, complexifications of their Lie algebras and their representations. About SUSY extensions of the SM, just tell them these are a failure they should ignore (other than as a lesson for what not to do in the future).

Update: I strongly recommend Sabine Hossenfelder’s latest video, “Scientific research has big problems, and it’s getting worse”. She’s been attacked over the years for this kind of critique, most recently as “a disgusting fraud peddling propaganda for fascist oligarchs”, but it’s a very important one that deserves to be taken seriously. In the video she starts out by pointing to a huge problem with scientific research that is getting much, much worse very fast: paper mills and bogus papers, a problem now being turbocharged by AI.

SUSY research is, I think, one of the things that has, for good reason, motivated her critique. Why is there a huge, still active field of people writing papers about a failed idea? What are the incentives and sociology that create this sort of phenomenon? The topic of this posting is an example of where I very much disagree with Hossenfelder. She likes to name the problem in fundamental physics “Mathematical Fiction” (and quotes others describing it as “Mathematical Gymnastics” or “Mathematical Cosmology”). But looking at the training of SUSY researchers here, the problem is not too much mathematics, but too little. Too many physicists firmly believe that understanding the basic details of what they are doing is a waste of time, that mathematicians’ insistence on clear, unambiguous and precise statements is nothing but pedantry. But if you have only a hazy idea of what the fundamental objects in your theory are and how they behave, then, absent the discipline of experimental tests, you have no hope of distinguishing what works from what doesn’t. SUSY research is an extreme case, where even failed experimental tests only slow the enterprise down, not stop it.

Given what is happening in the US, it is important to make clear the sort of reevaluation of federal support of science that Hossenfelder’s critique implies is needed. Such a reevaluation would require a strong dedication to distinguishing truth from lies. The current defunding of science at US research universities based on pro-genocide fanaticism and a mountain of lies about “antisemitism” is the opposite of what is needed.

Update: The problem with theorists being totally confused about Lie groups vs. Lie algebras and the symmetry groups of spin in 3 and 4 dimensions is not just in the SUSY subfield. For another example, take a look at Appendix A here, which starts off with a definition of the group SU(2) that is half a definition of the Lie algebra (self-adjoint matrices, up to a factor of i), half a definition of the group (det=1). Things go downhill from there in the rest of the section. Why would anyone write this “pedagogical” discussion when they didn’t understand the subject at all? Why did none of the five co-authors or a referee notice that this section was complete nonsense?

Update: For the latest on SUSY claims see here. The same story as at any time for the past forty years: no superpartners = no problem, since SUSY “predicts” superpartners just beyond the reach of current and past searches. Nowadays this is accomplished by invoking the landscape and “string naturalness”.


43 Responses to Pet Peeves

  1. Andrew says:

    > These were a failed idea pre-LHC (see my book…), and the LHC results conclusively confirm that failure.

    I don’t think that’s true. A fairer statement might be something like

    > I didn’t like these models pre-LHC (though lots of other people did) and they are less plausible than they were in light of the LHC results. Some of the simplest models are in danger of being ruled out, if one assumes that they account for all the dark matter and assumes standard cosmology.

  2. Amitabh Lath says:

    Hi Peter, I cannot comment on the group theoretical aspect of SUSY, but for experimentalists SUSY simulations are so ubiquitous that no matter what search you are doing, say leptons plus missing momenta or resonances with jets or taus, whatever it is, there is a SUSY signature you can use as a model to calculate acceptances. Publishing results as constraints on SUSY masses or couplings like tan(beta), etc., is an easy way to summarize results so that the phenomenology community can digest them.

  3. Peter Woit says:

    Andrew,
    You and the SUSY tribe are really deep in epistemic collapse. The problems with SUSY have never been that some people “don’t like it”. This is not all about vibes, there really is a truth to the matter.

    Also on the issue of truth “Some of the simplest models are in danger of being ruled out” is a lie. “All of the simplest models have long ago been ruled out” is a truth.

    The remarkable thing about the summer school lectures that I pointed to is that the epistemic collapse of the SUSY tribe is so bad that its members don’t understand basic facts about their subject and are unable to train new members of the tribe in what is a fact and what isn’t.

  4. Peter Woit says:

    Amit,
    A positive thing to say about SUSY models is that they are so complicated and there is such a large parameter space that experimental searches for them are going to be sensitive to a wide variety of possible failures of the SM.

    Experimentalists do need to keep in mind that SUSY and most other BSM theory models are not well-motivated, so of limited value in pointing out what to look for.

    Something about SUSY that is little discussed is that it’s very confusing how Wick rotation works. That these introductory lectures are already confused about the relation between relativistic spin in Euclidean and Minkowski signature, even before they get to SUSY, is an indication of the problem.

    My suspicion has always been that there’s something very interesting you would learn if you could properly understand this problem. Maybe what is really relevant to reality is some sort of “twisted” SUSY mistakenly thought to be unphysical (not that I have a serious proposal for this). I hope LHC experimentalists are doing what they can to measure spin polarization effects, maybe something very unexpected is happening there.

  5. Andrew says:

    Peter,

    What do you mean when you say that a model is ruled out? Can you give an example? I think you must mean something different to me.

    By model, I mean a choice of Lagrangian with unknown parameters, e.g. the cMSSM. By ruled out, I mean that a careful statistical test of the model has rejected all possible choices of unknown parameters at e.g. a 5% error rate (or an analogous statement for other statistical frameworks).

    Performing such tests isn’t easy. This one from some time ago found p ~ 10% for the cMSSM: https://arxiv.org/abs/1508.05951. And it makes assumptions about standard cosmology and dark matter.

    There is indeed a truth to the matter; I suspect the simpler models are ruled out by now if one makes standard assumptions about dark matter. But alas it’s technically and computationally challenging to find out.

    Do you mean that lots of parameter points in some models have been ruled out by LHC searches? Indeed they have, and so the models aren’t as plausible as they once were. However, I don’t think you’re justified in saying that LHC results conclusively show that SUSY was a failure. Can you put that in the language of a statistical test using LHC data? or is it just feels and vibes 😉

  6. Peter Woit says:

    Andrew,
    If you add 105 parameters to the Standard Model, you have something absurdly complicated that can’t be ruled out. It could however be used to generate a huge and worthless industry that has spent 45 years investigating the ugly details of this and could go on forever. Sabine Hossenfelder has apt comments.

    If you say “this model is not a complete joke, it stabilizes the weak scale”, then you have a prediction: weak scale SUSY, i.e. superpartners of masses similar to the W and Z. But this prediction was quickly falsified, and we’ve now seen decades of “predictions” of SUSY partner masses moving up and up every time there’s more data. Sorry, but at this point it’s completely obvious this is a failure. You can spend your life adding to the thousands and thousands of papers studying this worthless mess, but by any reasonable standard the whole thing failed long ago.

  7. Gavin says:

    The issue with Sabine’s recent videos is that she has started to imply that public funding of science should be replaced with private funding and that tech billionaires are the people who think the most seriously about long term benefits in society.

    [a few time stamps with notable quotes]
    https://youtu.be/htb_n7ok9AU?feature=shared&t=184
    https://youtu.be/htb_n7ok9AU?feature=shared&t=389

    As usual she manages to dance around really taking a stand so it’s hard to pin her down. She uses the strategy of saying [I am paraphrasing her comments in the Epilogue] “I am not saying this is what *should* happen. I am just saying that this is what I think will happen, and why arguments that this would be bad are wrong.” But I don’t think giving a few caveats in the Epilogue makes up for the fact that she spends almost all of the 22 minute video giving entirely one-sided arguments for why publicly funded academic research fails, and suggests that privately funded research would succeed, without subjecting *those* arguments to anywhere near the same level of scrutiny.

    As an example of her usual “hide the point she wants to make” strategy, she says at around 3:20

    “Elon Musk has called academia ‘a bastion of communism that operates on no feedback loop to reality’ and he said entirely correctly ‘that most scientific papers are useless.'”

    In the context of the video I would say she is highlighting Elon Musk’s quote about communism because it supports the point the video is clearly making (but not saying outright) — we *should* replace public funding of academia with private funding. But it’s worded in a way where if challenged she can say that she is only quoting Musk and only explicitly agreed on the point that most scientific papers are useless, which is maybe true or maybe not true but is a much less inflammatory point.

    She has always been a contrarian and I appreciate some of her takes even while I disagree with many of them, but to me that really does cross a line from “disagreements about the direction of science” into, well, “a disgusting fraud peddling propaganda for fascist oligarchs.”

  8. Peter Woit says:

    Gavin,
    Sorry, but I’m only willing to discuss what’s in the video I linked to, which seems to me quite sensible. Sabine puts out a huge amount of material, some of it more designed to be provocative than thoughtful and there’s plenty of this I would not agree with.

    But I’ve been the target often enough myself of people who want to discredit one’s serious arguments by taking something one said out of context and claiming that one “implied” something offensive or stupid. Not going to go look at the links you give and play that game here.

  9. Andrew says:

    Peter,

    It’s not as open and shut as that. Your comments are feels and vibes rather than quantitative science.

    What did SUSY models predict? As you say, that’s difficult to say as SUSY models typically have many unknown parameters and complicated phenomenology. This is indeed why it’s hard to test them and why it’s hard to say that models have been ruled out.

    They did predict a Higgs between about MZ and about 150 GeV. That worked, though 125 is on the heavy side. And assuming no RPV they often predict a dark matter candidate, though it’s getting harder to satisfy null results from direct searches for dark matter and annihilate fast enough to avoid overclosing the Universe, assuming standard cosmology.

    They did not necessarily predict weak-scale superparticles; that all depends on the unknown softbreaking parameters. If the softbreaking scales exceed the weak scale, the soft-breaking and SUSY preserving parameters must be fine-tuned. The fine-tuning required in light of the LHC results does make SUSY models less plausible and less appealing than they once were. However, these changes are challenging to quantify and I don’t know of any result showing that SUSY models are now a failure, whatever that means. I don’t think you’d be able to make a quantitative case that SUSY models were worse than any other solution to the hierarchy problem.

    You appeal for rigour in maths and complain about mathematical ambiguities and errors in Ben’s lecture notes. How about rigour in the way you assess the status of theories?

  10. JimV says:

    I am a longtime reader of, commentator at, and donor to Dr. Hossenfelder’s (written) Back Reaction blog. I have had a very few disagreements with her. One was when I suggested that her writing was good enough that she should publish a book of essays. At that time she felt her scientific work was much more worthwhile and disparaged the idea of spending time on commercial endeavors.

    Obviously that point of view has changed completely. I believe that, because she has a family to help support and had been uprooted to several different institutes in different countries in search of research grants, she felt that it was no longer a feasible way to live. So I think her problems with public research funding stem from lived experience, at least in her specific field of physics.

    I haven’t watched her videos consistently, preferring written essays, so the evolution of her as a video entrepreneur (as shown in the video cited in this post) has been quite surprising and somewhat impressive to me. Perhaps her feelings toward Elon Musk have softened along with this evolution, but for a while at least Musk was the butt of her jokes. I personally have never seen her give a favorable impression of a tech billionaire (which I would have noticed with some shock), but as I said I don’t watch a lot of her videos.

    This may not make it through moderation but I felt a different perspective than Mr. Gavin’s would add some balance.

  11. Peter Woit says:

    Andrew,
    The problem with the notes is not lack of rigor, but serious confusion about the basics of the subject. Similarly, the problem with continuing to promote the MSSM as a research topic is serious confusion about what science is.

  12. Peter Woit says:

    I’ll leave JimV’s comment as balance to Gavin’s, but will delete any others that aren’t specifically about the linked video. The two comments do show the problem with current “science communication”: serious discussion of issues is completely overwhelmed by this kind of pointless culture wars argumentation about whether someone is “good” or “bad”.

  13. Andrew says:

    Peter,

    On the one hand, the question ‘what is science’ is full of difficulties and nuances and I don’t know what science is. On the other, there is nothing I’ve written that should give anyone the impression that I’m not familiar with the basics of scientific argument or testing of hypotheses.

    In any case, we don’t need to open that can of worms. You’ve made exaggerated claims about SUSY being ruled out and have offered nothing of substance to back them up with. You shouldn’t be surprised that scientific communities don’t agree and you certainly don’t need to look for conspiracies or epistemic collapses or any other sociological phenomena.

    If you want to have a serious scientific discussion about the status of SUSY, we can continue. I’m happy to answer anything I can about why we pursue SUSY in moderate detail.

  14. zzz says:

    doesn’t the party making the claim “SUSY” have to provide the evidence ?

    you are requiring others to provide evidence its wrong ?

    i claim “UNICORNS”, you cant prove me wrong!

  15. Peter Woit says:

    zzz,
    Yes, except they have repeatedly been proven wrong. The history is
    1. We predict unicorns observable at LEP or the Tevatron. Falsified
    2. We predict unicorns observable at the LHC. Falsified
    3. We can’t predict anything anymore about unicorns, but we insist on training young theorists to study them. No one can prove to us that they’re not out there somewhere.

  16. Andrew says:

    zzz,

    Peter claims LHC results conclusively show that SUSY was a failed idea. Yes I’m asking for evidence and quantitative details. SUSY advocates do indeed have to find direct evidence for SUSY particles. That’s why they search for them at colliders.

    Peter,

    SUSY models don’t predict the softbreaking scale – it’s uncertain, as it depends on unknown softbreaking masses.

    For what it’s worth, you vacillate between saying that SUSY models cannot be ruled out because they don’t predict anything and that SUSY models were ruled out by the LHC. The truth is somewhere in the middle – SUSY models are hard to rule out because they don’t make sharp predictions for the softbreaking scale.

    I’m happy to continue if you or anyone else want a scientific discussion, but if we are just repeating ourselves I’ll call it a day.

  17. Peter Woit says:

    Andrew,
    My own attitude towards the MSSM has always been consistent: if you add 105 or more completely undetermined parameters to a theory you’ve created something horrendously ugly. And unfalsifiable. It’s a complete mystery why anyone is still studying this after 45 years of getting nothing out of it.

    What SUSY partisans have done is make endless claims about “testability” and “predictions” from this mess. This is bad because they are bogus. But then they’ve gone on to do something much worse. Whenever a “prediction” is falsified, they ignore this and make new “predictions”. One of the worst offenders has been Gordon Kane, and I’ve extensively documented how he does this, see for example
    https://www.math.columbia.edu/~woit/wordpress/?p=5793

    I don’t see how any sensible person can look at this story and interpret it as anything other than the story of a bad idea that failed badly.

  18. GS says:

    I mostly agree with Sabine in the video that you link to. The metrics of success in academic research have for decades been misaligned with the goal of true academic progress. The metrics incentivize people to obtain grant money (almost always from the government), publish as many papers as possible, and maximize citations to their published work.

    I remember becoming disillusioned with physics research when in graduate school in the early 2000s. I initially had an advisor who never seemed to be in the office, but who published a paper every few months on a different grant with a different collaborator in a different discipline. The papers were light on results and heavy on references to their other 10-15 jargon-filled no-result papers on the exact same topic, some of which had large blocks of identical text. It just seemed like a big game where nothing was actually being accomplished, but this person was widely considered a “rock star” because of the large amount of interdisciplinary grant funding, publications, and citations.

    I eventually quit working for that advisor and found someone who was a more honest scientific researcher, but I witnessed the perverted effects of these misaligned incentives so much during my graduate studies that I chose to pursue a career in industry rather than academia. (That’s not to say that research in industry doesn’t have its own misaligned incentives, but that’s a topic for another post).

    Despite my experiences, however, I am uncomfortable with Sabine broadly stating “I don’t trust scientists.” (15:14 in the linked video). I feel that a statement like that is easily taken out of context by the ever-growing proportion of anti-science laypeople in her audience, and I don’t think that it accurately summarizes what she is saying.

  19. Andrew says:

    Peter,

    Oh sure. We can falsify and rule out things that some people have said about SUSY. I’m sure lots of things Gordon Kane has said were falsified by LHC data.

    However, although the LHC rules out Gordon’s statements about SUSY, it hasn’t ruled out SUSY itself.

    The distinction is important. Gordon Kane might have said pre-LHC e.g., that the Standard Model Higgs was below 100 GeV. It isn’t. Gordon’s statement about the Standard model is ruled out by LHC data, but data doesn’t rule out the Standard Model itself.

  20. Alessandro Strumia says:

    On the positive side, conferences for 60yr-70yr birthdays of physicists who worked on SUSY are nicely quiet, as old “achievements” can now be celebrated without fighting for who should get the Nobel prize.

  21. Alan says:

    I was disappointed by Sabine’s video. While she draws attention to some abuses, she gives the impression that they are far more common, and far more cynical, than I’ve observed during my long and diverse career that exposed me to many academic disciplines. It may reflect the situation in fundamental physics, where I did my PhD with Stephen Hawking, as it has developed over the last forty years as evidenced by this blog. But I’ve been a Professor of Cognitive Science, Computer Science, Engineering, Psychology, and Statistics — and have collaborated with researchers in medicine, neuroscience, and psychiatry — and I’ve seen very little of the cynical academic strategies she describes. I’ve read about some disciplines where research sounds politically biased and I’ve certainly observed, particularly in medicine, low quality work which is unrepeatable, so I don’t deny there are problems. In my opinion every discipline has fashions, which some people (not me!) are better at creating and exploiting than others, but if these fashions don’t fit data or make progress (broadly defined) they are abandoned. The problem is for disciplines where it is hard to benchmark progress and correct the fashions. When I started my PhD I asked Stephen if there were any experimental ways to test quantum gravity (my thesis topic) and he said no with a big grin (I worried about this and left physics after a postdoc to pursue AI, which at that time was in its early pioneering stages, hence explaining the diversity of my career). Lacking experimental tests, there are few ways to correct the fashions — and creating fashions is a basic aspect of human nature (as is learning to game the system to obtain grants and high citations).

  22. Dave says:

    The problem I find with Sabine’s video is one I find with many of her videos. There are truths there (generally ones I feel are pretty self-evident to people in the first place), such as the incentives problem that rewards # of papers/citations/overhyping incremental results. I think anyone who works as an academic scientist and who is honest with themselves knows this issue. I actually don’t really think she states the issues here optimally, but that is a small quibble. Where it always goes off the rails for me is the jump from that to the sense that the “return on investment in science” is gone and the model we have is fundamentally broken. These statements are made with no evidence and are completely unsupported by reality in my mind. I think Sabine is ignorant of most branches of science. Maybe she could look to recent huge advances in gene editing or immunotherapy for cancer as massive examples of recent breakthroughs in important fields built on years of fundamental academic work that would never happen in a new model based on short-term incentives funded by the private sector. The way to fix the incentives problem is not to move to a radically different model, it is to reduce the # of people fighting for grant money, reduce the # of journals and the impact factor chasing, and not incentivize giving out grant money based on “preliminary” (read already finished) results and the counting of the same metrics listed above but instead based on the viability and goodness of the ideas. Changing these things can’t be done completely and won’t fix all of the issues, but the main point is I think science is far less broken than Sabine does. If your view comes from the world of high energy physics, I get where one could think *all* science is broken, but I think this is a myopic view.

  23. Peter Woit says:

    Alan/Dave,
    My own experience has been seeing a very different situation in two related fields (pure math and theoretical physics). The research environment in pure math that I’ve seen is pretty healthy, although you can find some examples of the kind of problems Sabine is pointing to. The environment in particle theory is completely different. It really has collapsed intellectually to a large degree (the SUSY issues I’m pointing specifically to are just one example). That this has been going on for decades, just continues to go from bad to worse, and that there has been zero effort to do anything about it (other than attacking personally people who publicize the problem) drives Sabine crazy in a way that I can completely understand.

    But seeing the different situation in pure math and the complexities of why this is true make me very aware that very different things are happening in different fields, and if you’re not in the field you are not going to have the perspective needed to really understand what the specific problems are and how bad they are. I’m also very aware that the fields I know about are very much ones that have little to do with useful applications and making the world a better place (or at least making some people a lot of money). Most people are interested in evaluating science on its practical results, which is something I basically have zero expertise in.

    I’ll stick to criticizing what I know well, reaching a small audience, having little effect and being easily ignored. Sabine has decided to go for a big audience and try and have a bigger effect. She is much harder for people to ignore. I hope she’ll get attention for the real problems she is pointing to, get ignored when she goes overboard. The danger of course is the opposite.

    One interesting thing about this new video is her discussion of the new wave of bad papers generated by AI. Those fields of science where original ideas have disappeared and people are cranking out rote worthless papers may be ones that in coming years will be overwhelmed by AI. When LLMs can put out an infinite number of papers indistinguishable in quality from the current research literature, what happens to fields where people are evaluated by the number of such things they can write?

  24. Follower says:

    Peter,

    At 6:23 in the video, Sabine says “I’ve heard faculty members admitting that the relevant section of their grant proposal is made up of nonsense, more than once.” She told a similar story in one of her previous videos, in which she read an email that she claimed was sent to her by a physicist from a top institution. Should we take her at her word without evidence? Do faculty, in your long career and experience, Peter, ever really admit openly among colleagues that they write nonsense in their grant proposals or in their research papers?

    Thank you!

  25. Peter Woit says:

    Follower,
    My experience with Sabine is that she is honest to a fault, so I think you can trust that such stories have a basis in reality. In my own experience, this kind of thing is rare, but not unknown. Some people will openly express extreme cynicism about grant proposals and some research papers (mostly other people’s though, rarely their own). Hard to know how many people think this way and don’t say anything. I suspect it is very field dependent. One would hope that in fields producing huge numbers of worthless papers that the people doing it have some self-awareness.

    The problem with such stories is not that they’re untrue, but that the small number of them means they don’t tell you anything about whether there’s a general problem as opposed to a very localized one. If you could get everyone to speak honestly, in unhealthy fields I suspect you’d find relatively few people saying their own work was crap, but many saying their field’s literature was full of crap and the incentives were pushing them to produce much lower quality work than they would like to be doing.

  26. Dave says:

    Hi Peter-
    I think the issue I have is that:
    1. HEP is a tiny slice of science by activity overall, and has always been, despite the sense projected in the early post-WWII era given the outsized influence of the Los Alamos generation of theorists and the generations that followed.
    2. A discussion for another time, but the lack of health of HEP must at least partly if not largely be due to its own success in those generations. If you are left with trying to understand the Standard Model at a deeper level (i.e. connections between parameters and why they appear at the scales they do) or marrying forces that operate at scales so disparate in orders of magnitude, then things are most likely going to look stalled for a long time. I don’t at all doubt that groupthink and ideological capture have made this worse, but I strongly suspect that post-1980 HEP would still see little progress even if string theory and the like had not existed. Just my own (uninformed, as someone not working in this field) guess.
    3. Other fields of physics are not really suffering at all. Condensed matter is doing fine. AMO same. Biophysics, which hardly existed 25 years ago, is moving along just fine. Other areas (computer science, large swaths of chemistry and biology) are doing just fine. All of this may change, but for now there is no justified alarm.

    The way funding works is an issue, and the overgrowth of academia and the way many people judge others in the field are also problems. But I feel the net positive production now isn’t dramatically down, is actually quite positive, and the issues (real issues pointed out by Sabine) can largely be addressed without the radical calls made by podcasters who make sweeping, often unsupported statements which extrapolate way beyond their areas of expertise. To me the bigger danger is the fact that these people have a huge uneducated platform of people who end up getting part of the right idea and a lot of the wrong idea, which is more damaging than the actual problems discussed.

  27. CWJ says:

    My own experience matches that of Dave. I’m in low-energy nuclear theory, where we have both a long history of phenomenology with relatively recent advances in much more rigorous ab initio approaches, which we apply to a wide breadth of behaviors, driven by new experimental data. We have a healthy community with many young people who bring new ideas and new energy, and who are supported by more senior researchers. There are egos and competition, of course, but, strategically encouraged by the US funding agencies (NSF and DOE), we have a good mix of collaboration and individual projects, even in theory. Of course nothing is perfect, but we constantly talk about how to improve. Just this past week I was at a community meeting discussing just that.

    Over a decade ago, using NSF data I found through this blog, I calculated that HEP theory is only about 1% of the NSF physics portfolio (and string theory only 0.5%), a fact I like to use to remind people that HEP physics is not all there is to physics.

    This is my pet peeve (and it extends to this blog, which I otherwise enjoy, and to Sabine, who I otherwise admire): conflating a small subfield with all of physics, or, worse, with all of “scientific research,” and then saying or implying the entire endeavor is problematic. This is misleading and wrong. I’m frankly tired of it.

  28. Peter Woit says:

    Dave,
    Just one comment about the HEP theory problem. Yes, one way the situation is unusual is that the field is a victim of its own success: the SM was in place by 1973 and it is so powerful and successful that it is extremely hard to do better. In an alternate universe with no string theory/SUSY it is quite possible that we still wouldn’t have found a way to do better.

    What’s bothering me now is not so much “we haven’t been able to make progress” but more “we’ve collapsed the environment in which progress might take place, and salted the earth to make sure it can’t be regrown”. Instead of failures that we should learn from, string theory and SUSY are considered foundational parts of the subject. Columbia this fall will be teaching an undergraduate course in string theory. Lots of theory grad students are being trained in the intricacies of SUSY. If the best students in the field are being trained from their undergraduate days that the way forward must come out of string theory and SUSY (while at the same time being given incoherent and incorrect training in the successful basics of the SM) we’re moving backwards, not forwards.

  29. Peter Woit says:

    CWJ,
    I do try (if not always successfully…) to make clear that what I am criticizing is specific to certain subfields of theoretical physics, where failed ideas and hype have come to dominate.

  30. Peter Orland says:

    For noncompact Lie groups, the mistake is even more glaring, as there may be elements of the group which are not exponentials of elements of the corresponding Lie algebra. Such groups have elements for which the logarithm does not exist.

  31. Peter Orland says:

    For example: if $\lambda < 0$, then the matrix
    $$\begin{pmatrix} \lambda & 0\\ 0 & \lambda^{-1}\end{pmatrix}$$
    is in $SL(2,\mathbf R)$, but it cannot be written as the exponential of $i$ times an element of $sl(2,\mathbf R)$.

  32. Peter Orland says:

    Slight correction. Noncompact Lie groups may have an element whose logarithm is not in the corresponding Lie algebra.

  33. Peter Woit says:

    Peter Orland,
    The depressing thing is that the confusion is often much more basic, see for example the paper I linked to in the update.

  34. Jerome M says:

    Peter,

    There is a common theme in your post of semantics. I agree with Dave and CWJ, but from a materials science and chemistry perspective. Referring only to the linked SH video, I watched and found it infuriating because of the wildly imprecise language. No, research is not broken, there are many thriving fields. And her repeatedly saying “I don’t trust scientists” is to me the height of irresponsibility in the current climate. Not only is funding being dramatically cut, but foundational ideas like the germ theory of disease are being set aside by politicians in power in the US.

    Regarding LLM-generated papers, an example of one that actually fooled semi-competent researchers would be interesting.

  35. Alex says:

    Good that you point out the utter sloppiness in material that should be quite basic for a theoretical physicist, Peter.

    I remember when I was studying this material for the first time, my first encounter with it was in QM textbooks and in QFT textbooks (all oriented toward physicists). I was having a hard time trying to understand it because I could not see any logic in the way the equations were being presented and written one after another as if they were logically obvious derivations. Of course, they were not, because those steps were all technically wrong (although the results at the very end were often correct).

    The solution for me was just to learn this material from actual math books (of which there are plenty on this specific topic), and also from books on QM and QFT but “for mathematicians” (of course, you will not get the best physical artisanal craft to solve hard QFT calculations from that, but the basic concepts are presented in a much better way!)

    It takes much more time, though. And with today’s frenetic pace and pressure to publish in BS theories, people take the shortcuts, which lowers the quality even more. I took my time simply because I enjoyed learning this and wanted to really understand it in a logically sound way. I blame the QFT culture of particle physicists for introducing this sloppiness into theoretical physicists. Since there’s not much to do in terms of new theories today, maybe it is time to redirect some energy and effort to overhaul the whole way this material is presented and put it to a better standard, as happened with GR with textbooks like Hawking & Ellis and Wald (to name a few).

  36. George says:

    I used to read Sabine’s blog but haven’t been catching up with her since she moved to Youtube, which is not my favourite way of communicating ideas/news, so I didn’t know that she is promoting billionaires.

    However, I do agree with Sabine that many aspects of current research and research practices are broken. To be fair, I think you can only really appreciate what she is saying if you have been marginalized in one way or another. For example, I have been pushed out of academia (i.e. unable to find a permanent position), “for not having enough papers and citations”. This is despite publishing in top journals like PRL in different fields (statistical mechanics and biophysics).

    If for you everything is great, then it is very unlikely you will read about “trouble in paradise”, and unlikely you will bring any change to this ridiculous system we have now, which completely discourages risky, worthy science and sends a message that you’d better do whatever everyone else is doing, because strength is in numbers.

    Also, people who are in academia are really in danger if they voice their concerns because their papers and grants need to be funded, and you will not get funding if you are making a lot of enemies (even if what you are saying is the truth).

    For those who think everything is great, have a look at several websites that I follow regularly: Retraction Watch, PubPeer and the blog “For Better Science”. If these websites do not make you realise that science is in real danger, then I don’t know what will.

    Once you realise that we can have Nobel Prize winners retract dozens of papers without facing any consequences, you will start to wonder what it is that we are doing here and trying to achieve.

  37. Alan says:

    Hello Peter,

    I like the fact that you write about what you know — physics, maths, and Columbia — and give a reality check to hype and misinformation. I was disturbed that Sabine seems to be moving towards misinformation.

    You, Sabine and some commenters mentioned LLMs. As an AI researcher who uses them, I think they could be useful as a tool to identify unhealthy research fields. Rather like the Alan Sokal affair, when he got a spoof paper on Quantum Gravity and Hermeneutics published, but at a far bigger scale. I predict some AI researchers will try to do this as a scientific experiment. Interest in what LLMs can do is high and there is a new conference at Stanford where papers are written by AI and reviewed by AI. A Turing test, pitting LLMs against peer review, would help find the limitations of LLMs and the reviewing process in different disciplines (but would reviewers be willing to take part in such a test?). I’d predict current LLMs would fail badly in healthy fields, which might drive AI researchers to improve them.

    At present I’m sure that no LLMs could write a paper that would pass peer review in the leading AI conferences, though there are complaints that a few reviewers over rely on them (suspected incidents are reported). I’d guess the same would be true for maths or physics. And if LLMs were developed that could, then this would be either a scientific advance (to be celebrated) or a sign that peer review needs to be significantly changed (as always, there is much wrong with current review).

    Alan

  38. Peter Woit says:

    All,
    Enough about Sabine Hossenfelder for now. I agree she’s making a mistake sometimes with general criticism of “all of science”, a topic about which I don’t think there’s anything sensible to be said.

    Alex,
    The problem is that you can’t just tell physicists to read the math literature. In many subjects the math literature is not very readable, and it’s typically aimed at generality, so digging out of it the specific cases relevant to physics can be very difficult.

    Spinors are a good example. You can find various versions of the story of spinors and rotations in the math literature, but almost always doing things for arbitrary dimensions and arbitrary signature. Mathematicians rarely write down the details of the specific case of 3 and 4 dimensions, which are very special (in particular the phenomenon of the spin group not being simple in signatures (4,0) and (2,2)).

    Physicists often recognize that students need an expository discussion of this subject beyond the formulas in every textbook, and then go write something really confused, because they are confused about this. This is a bit disturbing: if you’re confused about something and you try and teach it to others, it’s very bad news if this doesn’t make clear to you that you are confused and encourage you to become unconfused. Bad to be confused, worse to not even be aware you are confused.

  39. Amitabh Lath says:

    Hi Peter, have you seen the movie “Particle Fever”? We show it to the high school students in our summer program on fundamental physics. These kids are among the top physics and math students in New Jersey, and love everything physics. We show the movie and discuss it. There is a pretty clear message: Nature has to choose between SUSY (Kaplan) and Multiverse (Nima). So a statement like “there is no SUSY” is interpreted as “Multiverse must be real”. We manage to convey our experimentalists’ disdain for Multiverse, but it’s harder to kill than SUSY.

  40. Peter Woit says:

    Amit,
    I wrote about Particle Fever when it came out, see the review here:
    https://www.math.columbia.edu/~woit/wordpress/?p=6308
    especially the nonsense about the multiverse. The fact that the multiverse “predicts” 140 GeV for the Higgs mass has conveniently been forgotten. You might want to emphasize that for the students…

  41. I saw that film when it came out, but I had forgotten that dichotomy. It was a high-quality production; it is a pity it wasn’t immune to including the more speculative stuff in with the hardcore experimental and solid theoretical physics, which are magnificent achievements on their own.

  42. Gavin says:

    One of the things I found confusing studying group theory during my Physics PhD was how mathematicians would define a representation of a group in terms of the matrices that represent the group elements, while physicists tend to define a representation in terms of the vectors in the vector space (i.e., “the electron is a spinor rep of the Lorentz group” instead of something like “the electron is a spinor in the spinor rep of the Lorentz group”). After I realized that’s what was going on, it made sense, since mathematicians are typically more interested in the group itself, whereas physicists are typically interested in particles/fields which transform under group transformations. But the lack of precision made it confusing to learn what was going on.

    Over the years I found a lot of physics notation/language is like that — it is optimized for doing calculations but often not for pedagogical clarity or conceptual precision.

  43. Peter Woit says:

    Gavin,
    The confusion is inherent in the fact that a representation is a vector space together with linear maps on the vector space. If you want to be more concise, depending on the context you might just specify the vector space (when it’s clear what the representation is). Not just physicists do this, mathematicians too. Main difference is that sometimes physicists are hazy about what a representation is, and more likely to use ambiguous and confusing language to refer to it.
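
    To spell it out: a representation of a group $G$ is a pair $(V,\rho)$, with $\rho: G \to GL(V)$ a group homomorphism. The common shorthand of naming only $V$ (or only $\rho$) is harmless as long as it’s clear which pair is meant.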
