Physics colloquium on CERN's LHC

The semester has started and the first colloquium in the Dept. of Physics here at IIT Madras happened today. The speaker was Dr. Kajari Mazumdar, a faculty member at TIFR, Bombay. She spoke on the topic 'LHC: The unbelievable pursuit of the unimaginable' and, over the course of her talk, described how physics that is unimaginable to most of the public is being brought to life using unbelievable machines and brain power.

Personally speaking, I have come to admire speakers who put numbers in their popular talks. Early in my undergraduate years here at IITM, during a popular session on astronomy, I was asked to stop speaking in qualitative terms and be quantitative, especially given my interest in pursuing physics after graduation. I have since come to understand how beautiful stories can be woven around numbers, and I admire professors who give a better perspective on their work and research area by quantifying it in a way the layman can understand.

To be specific, the speaker started by talking about the energy scales the LHC works at. To put them in perspective, she quantified the typical energy of a molecule of air at room temperature, of a chemical and a nuclear reaction, and the rest mass energy of a proton. The LHC deals with beams of protons (and sometimes lead ions) that are smashed into one another, each beam having an energy of 4 TeV. To put that number in context, most chemical reactions happen at around 1 eV and nuclear reactions at around 1 MeV. If that doesn't help you grasp the amount of energy involved, let's look at it another way: for the protons to carry an energy of 4 TeV, they are accelerated to 99.99999% of the speed of light. This isn't done in one step; five accelerators are used sequentially to boost the protons to such high speeds and energies. In the future, the LHC is supposed to work at 6.5 TeV per beam.
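As a quick sanity check on that speed, special relativity relates a proton's energy to its Lorentz factor via γ = E / (mc²), and its speed follows from β = √(1 − 1/γ²). Here is a small back-of-the-envelope script (the rest mass is the standard value, not a number from the talk):

```python
import math

M_PROTON_GEV = 0.938272   # proton rest mass energy in GeV (standard value)
beam_energy_gev = 4000.0  # 4 TeV per proton, as quoted in the talk

# Lorentz factor gamma = E / (m c^2)
gamma = beam_energy_gev / M_PROTON_GEV

# Speed as a fraction of c: beta = sqrt(1 - 1/gamma^2)
beta = math.sqrt(1.0 - 1.0 / gamma**2)

print(f"Lorentz factor: {gamma:.0f}")
print(f"Speed: {beta * 100:.6f}% of the speed of light")
```

Running this gives a Lorentz factor of about 4263 and a speed a shade under 99.99999% of c, consistent with the figure from the talk.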

Moving on, the speaker talked about why the LHC is necessary in the first place. Popular media hypes up the end result of a physics experiment, while the process involved in obtaining that result, as fascinating as or maybe even more fascinating than the result itself, is often neglected. Similarly, while the public is of the opinion that the discovery of the Higgs boson was the sole purpose of the LHC, the speaker pointed out that the search for the Higgs boson was not the first, and will not be the last, item on the list of things the LHC is needed for. The discovery of the Higgs boson was necessary to prove or disprove the standard model of physics, which Shruti Patel wrote about earlier on this blog. With the Higgs boson discovered, the other experiments running at the LHC are being popularized, such as the one aimed at understanding CP violation, which is the reason the universe is matter dominated and there is so little free anti-matter, and which Shruti also talked about. She went on to talk about physics beyond the standard model, but I digress. The speaker described how, over the years, scientists developed the theory that unifies the electromagnetic and weak forces and describes the strong force within the same framework, and how the discovery of the Higgs boson would confirm this theory beyond doubt.

Towards the latter part of the talk, the speaker explained how the detectors at the LHC actually work, specifically ATLAS and CMS, both of which were crucial in discovering the Higgs boson. She talked about the challenges in handling the huge amount of data coming from the detectors: 60 TB/s of raw data, which is reduced to 150 GB/s of meaningful data, of which 225 MB/s is finally stored for offline analysis. A large amount of computing power goes into converting the raw data produced at the instruments into data from which scientists can extract meaningful results. It's common knowledge that the World Wide Web began as an academic project at CERN, built to share data and work collaboratively. Even today, electrical engineers, computer scientists and materials scientists work alongside physicists at CERN, developing more sensitive and accurate detectors and handling the massive data generated at the instruments. The LHC is an engineering marvel in the true sense of the word and one of the greatest achievements of the academic community in the last decade.
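To get a feel for how aggressive that filtering chain is, the reduction factors follow directly from the three rates quoted above (the stage labels below are my own shorthand, not CERN's terminology):

```python
# Data rates quoted in the talk, in bytes per second.
rates = {
    "raw detector output": 60e12,          # 60 TB/s
    "after triggering": 150e9,             # 150 GB/s
    "stored for offline analysis": 225e6,  # 225 MB/s
}

# Print the reduction factor achieved at each successive stage.
stages = list(rates.items())
for (_, before), (name, after) in zip(stages, stages[1:]):
    print(f"{name}: roughly 1 part in {before / after:.0f} kept")

overall = rates["raw detector output"] / rates["stored for offline analysis"]
print(f"overall: about 1 byte stored per {overall:.0f} bytes produced")
```

That is a factor of 400 at the first stage, roughly 667 at the second, and an overall reduction of about 270,000 to 1 between what the detectors produce and what is kept.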

I have personally attended a couple of other talks on particle physics, specifically two on neutrino detectors, and read up a bit on neutrinos myself back when they were (erroneously) found to travel faster than light. The design and technology behind particle detectors never cease to amaze me, and the speaker was eloquent in putting forward why she thinks the LHC is a big deal in the scientific community.

Sorry for not adding links at relevant places, I'm past my 30 min limit and I shall add links tomorrow. There's always more to talk about...

EDIT : added links to articles by Shruti.
