- SIMPLE HARMONIC MOTION – PART 1
Look at the world around you. What do you see? Your room? Beautiful walls, plants of a magnificent verdant hue, a sparkly blue sky and birds chirping joyfully somewhere in the far distance? Or, if you live in a city, the hum of a drilling machine incessantly boring through some annoying neighbor's flat at midnight, or the immense skyscrapers somewhere downtown. Right now, as I am typing this, I am pressing on the keys of my laptop, which promptly pop back up after I lift my fingers. What is common between all of these?

Well, first, I shall apologize for that woeful image I presented. I am sure your surroundings are not as ideal as you'd like them to be, but, as an outside observer, I do have to make assumptions and approximations, do I not? The simple harmonic oscillator is the most marvelous, and mystifyingly accurate, approximation that describes this reality. All around us, the motion of everything in equilibrium, everything remotely periodic and even non-periodic, the motion of anything that transfers energy, is governed, at the simplest, molecular level, by the laws of simple harmonic motion.

This article is just the first part of a multi-part journey into how oscillators work, how they combine to transfer energy in the most wonderful of ways, and how the idea of an oscillator as the basis of reality arises most naturally from the dynamics of continuous media. So, let's start this exploration. In this article, I hope to cover the following:

1) Hooke's law
2) A derivation of the trajectory of a particle attached to a spring, and
3) A look at key concepts in SHM, such as frequency, time period, and amplitude

HOOKE'S LAW

Here's the thing – no discussion about simple harmonic motion would be complete without Hooke's law. Hooke's law, simply put, governs the behavior of an ideal spring subjected to an external force. What is an ideal spring? Well, an ideal spring is massless and, um, obeys Hooke's law.
Put simply, the statement of Hooke's law is this: the extension of an ideal spring under an applied force is directly proportional to the applied force, i.e., x ∝ F. More rigorously, it is often written as:

F = kx

where k is the spring constant of the spring. This is nothing but a constant of proportionality. Now, consider the following situation: we have a mass m attached to an end of a spring, and have applied a force F to extend it. What force does the spring exert on the mass? Since the mass is in equilibrium, this force must be equal and opposite to the applied force kx on the spring. Thus, for a mass connected to a spring, the force exerted by the spring is:

F = -kx

This is perhaps the most fundamental equation in all of classical mechanics. Now, what does a spring constant mean? Well, looking at the equations above, if the spring constant of a mass-spring system is high, then, for the same external force, the system extends, or deforms, to a smaller extent. Thus, the spring constant measures the rigidity of a spring. We will not get into matters of material science in this series, but the idea of a force being directly proportional to extension, connected by a constant, simplifies the description of how materials react to external inputs enormously.

Let us reflect, for a moment, that Hooke's law is not just a mathematical statement, or even a fundamental theorem about the workings of reality. It is a statement of causality. It defines the idea of a spring constant, and in doing so it connects a measurable output, the extension, to a measurable input, the force. Thus, in and of itself, Hooke's law is a remarkable discovery about how matter deforms under inputs. With that, we come to the next important section of this article.

NEWTON'S LAWS AND THE MOTION OF A PARTICLE

We now have an expression for the force experienced by a particle on a spring.
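Before moving on, the two readings of Hooke's law above can be sketched in a few lines of Python. The spring constants and forces here are arbitrary illustrative values, not anything from the article:

```python
# Hooke's law sketch: the spring force opposes displacement, and a
# stiffer spring (larger k) deforms less under the same applied force.
def spring_force(k: float, x: float) -> float:
    """Restoring force F = -k*x exerted by an ideal spring displaced by x."""
    return -k * x

def extension_under_load(k: float, applied_force: float) -> float:
    """Equilibrium extension x = F / k under an applied force F."""
    return applied_force / k

# The same 10 N load stretches a soft spring (k = 50 N/m) ten times
# as far as a stiff one (k = 500 N/m).
soft = extension_under_load(50.0, 10.0)     # 0.2 m
stiff = extension_under_load(500.0, 10.0)   # 0.02 m
assert stiff < soft

# In equilibrium, the spring force balances the applied force.
assert abs(spring_force(50.0, soft) + 10.0) < 1e-9
```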
The next logical step would be to derive an expression for the exact motion of the particle as a function of time. How? Well, we can start with the other causal relation we know of: Newton's second law, that the force on a particle is equal to its mass times its acceleration. For a particle on a spring in a vacuum, the net force will be the force exerted by the spring. Thus, we get:

ma = -kx

Then, realizing that acceleration is the second derivative of position,

m(d²x/dt²) = -kx

How do we solve this second order differential equation? Well, we assume the solution x(t) is of the form:

x(t) = Ae^(rt)

Substituting this into our equation and doing some calculus gives:

A(mr² + k)e^(rt) = 0

Thus, we have two options: either A is zero, which is a trivial solution (x = 0 solves any homogeneous linear differential equation), or

mr² + k = 0

Thus, we get:

r = ±i√(k/m)

Note that we have two values of r, and thus our solution is a linear combination of both the possible solutions. Now, define:

ω = √(k/m)

Thus, our general solution to the equation is:

x(t) = C₁e^(iωt) + C₂e^(-iωt)

Well, the motion of the particle is obviously real. We can actually simplify x(t) by writing it purely in terms of sines and cosines – an exercise left to the reader. Once the algebra is done, we get:

x(t) = A cos(ωt + φ)

Thus, the motion of a particle on a spring is an oscillation back and forth about a mean position! Isn't that wonderful? A graph is below, for reference:

[A graph of x(t), with A, ω, φ set to 1. Credit: Desmos]

KEY CONCEPTS IN OSCILLATIONS

Now, we look at the expression for x(t) and see what is in it. The term A is known as the amplitude of the oscillation – it is the maximum displacement from the mean position. The quantity we defined as ω is the angular frequency of the oscillation. The time period of the oscillation is thus:

T = 2π/ω

The factor of φ in the argument is the phase of the oscillator. This simply tells you at what point in the oscillation cycle the particle starts. It does not really matter when you want to analyse the motion of one particle.
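The derivation above can also be verified numerically: with arbitrary assumed values for m, k, A and φ, the solution x(t) = A cos(ωt + φ) should satisfy m x'' = -kx, and should repeat itself after one time period T = 2π/ω. A minimal sketch:

```python
import math

# Numerical check (with arbitrary constants): x(t) = A cos(wt + phi)
# satisfies m x'' = -k x, and repeats with period T = 2*pi/w.
m, k = 2.0, 8.0
w = math.sqrt(k / m)        # angular frequency
A, phi = 1.5, 0.3           # arbitrary amplitude and phase

def x(t):
    return A * math.cos(w * t + phi)

def x_ddot(t, h=1e-4):
    """Second derivative of x by central differences."""
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

# The equation of motion m x'' = -k x holds at several sample times.
for t in [0.0, 0.7, 2.1]:
    assert abs(m * x_ddot(t) + k * x(t)) < 1e-5

# The motion repeats after one time period.
T = 2 * math.pi / w
assert abs(x(1.0 + T) - x(1.0)) < 1e-9
```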
However, when you have multiple oscillations superposed on each other, these phases can interact and affect the overall motion. What do I mean? We will come to that later. This introduction should be sufficient to set some of the mathematical background for the upcoming articles, as we will be diving into some beautiful mathematics. For now, I hope you understand the basics of simple harmonic motion, and the fundamental idea that, when a particle experiences a restoring force that is directly proportional to its displacement, the subsequent motion is always simple harmonic, and periodic. In the next article, we shall talk more about this, treating the system from the point of view of conservation of energy, and looking at damping as well. Till next time!
- The Energetic Forces Behind Faraday's Law
- By Rohan Joshi

Picture this – you are in an electromagnetism class, let's say in middle school, and your teacher tells you, for the first time, that Faraday's law is a thing – that a change in the magnetic flux through a region induces an electromotive force (well, voltage) across the wire/curve through which the field cuts, to oppose the change in flux. And this is an extremely powerful law – it is the basis of generators, of inductors, of the LC circuit, and had, at its first conception, astounding implications for the nature of the electromagnetic field. It seems a little counterintuitive, does it not? That the magnetic field can effect a change in the electric field? Well, I have found an intuitive explanation for this law. It is perhaps not the most mathematically or physically rigorous explanation, but it is for any of you who felt that this law seemed extremely arbitrary, founded entirely upon empirical evidence (not a bad thing, but knowing that some kind of theoretical background exists for a law always makes one feel better).

MAXWELL'S EQUATIONS

Now picture that you are in high school – or college – and your teacher, once again, tells you about Maxwell's equations. Here they are, to jog your memory:

(1) ∇ · E = ρ/ε₀
(2) ∇ · B = 0
(3) ∇ × E = -∂B/∂t
(4) ∇ × B = μ₀J + μ₀ε₀(∂E/∂t)

These equations specify not just the nature of the electric and magnetic fields, but also their evolution through time. These equations are articulate and beautiful, as they also give us an idea as to how these fields are generated. The first one, Gauss' law, tells us that an electric field is generated by a charge – we shall show that a varying electric field is actually equivalent to a current! Secondly, equation (4) says that magnetic fields are generated by moving charges – currents. Note the inclusion of the changing electric field. Lastly, the second one is simple – no magnetic monopoles exist. The third one never really made sense to me – that is Faraday's law, by the way. Or the Maxwell-Faraday law if you want to be pedantic.
After all, (1) and (4) simply stated the origin of the electric and magnetic fields from their respective sources. Maybe (3) would make sense if magnetic "charges" existed, but (2) makes sure that this cannot happen. So, why, why does equation (3) make sense? Well, an explanation is coming up, and it involves perhaps the most important statement in all of physics – the conservation of energy.

BACKGROUND: RADIANT ENERGY

As we all know, electromagnetic energy travels in the form of waves – electric and magnetic fields oscillating as the wave propagates at the speed of light through a vacuum, or any medium for that matter. However, these waves do not exist just for fun – they propagate energy through the medium. We will use this fact, along with the fact that the electric and magnetic fields oscillate perpendicular to each other, to demonstrate the nature of energy flow through a given region of space. The direction of propagation of these waves is that of the cross product of E and B. Now, consider a closed volume within space. The total energy flowing out of this volume will be proportional to the flux of the propagation vector through its surface. Define F = k(E × B), where k is a scalar constant. Thus,

∮ F · dA = ∫ (∇ · F) dV

via the Divergence theorem. Now, if no energy is flowing out of a region of space, this tells us that

∇ · (E × B) = 0

This is a fact that we will use later in the article.

ENERGY DENSITY

Another quick piece of background. The total energy density of the electromagnetic field in a vacuum is given by

u = (ε₀/2)E² + (1/2μ₀)B²

The total energy of the field is thus

U = ∫ u dV

In the case that there is no outflow or inflow of energy,

dU/dt = 0

CURRENT-FIELD EQUIVALENCE: A SLIGHT DETOUR

Last bit of background – once this is established, the explanation behind the third equation is pretty quick. Consider Ampere's law:

∮ B · dl = μ₀I

This tells us that a magnetic field B is created by a current. Now, the equivalence of a current with a changing electric field can be shown in the case of a closed surface. By Gauss' law, the enclosed charge is Q = ε₀ ∮ E · dA, so

I = dQ/dt = ε₀ (d/dt) ∮ E · dA
Thus, a time varying electric field in a closed volume is equivalent to a current. Substituting the above into Ampere's law in the absence of any "actual" current (the above quantity is called the displacement current), we get

∮ B · dl = μ₀ε₀ (d/dt) ∫ E · dA

which is equivalent to Maxwell's fourth equation via Stokes' theorem. Note that the surface integral has become open now in order to accommodate the formation of a magnetic field along any curve on the volume we are considering.

The concept behind Faraday's law is a simple application of conservation of energy. We know from above that, with no energy flowing out, ∇ · (E × B) = 0, which expands to

B · (∇ × E) = E · (∇ × B)

and that the total energy is constant, so dU/dt = 0, i.e.,

ε₀E · (∂E/∂t) + (1/μ₀)B · (∂B/∂t) = 0

Substituting ∇ × B = μ₀ε₀(∂E/∂t) into the first relation gives B · (∇ × E) = μ₀ε₀E · (∂E/∂t) = -B · (∂B/∂t). Thus,

∇ × E = -∂B/∂t

And that is exactly equal to the third equation! Thus, through a simple application of the conservation of energy, we have shown that a time varying magnetic field produces an electric field! Physically, this can be thought of as follows. Consider a constant (in time) electromagnetic field existing in a given volume of space. Naturally, the total energy contained within this volume will be constant. However, if there is a small "blip" in the magnetic field, that is, a small variation in time, the total energy of the volume changes. Since this cannot happen, something else must happen to resist this change – the creation of a new electric field, whose curl is equal and opposite to the variation of the magnetic field with time!

There is also an interesting link between the creation of a "rotating" electric field, i.e., a field with a non-zero curl, and the equivalence between electric field and current we established. A time varying electric field will itself create a magnetic field, just like a current, to cancel out the "blip" in the existing magnetic field. Further, if there is actually a charge or wire present, the current induced will have the exact same effect! So, that's it! This may not be the most rigorous explanation, but this discovery provided me with a sense of closure – that Faraday's law is not arbitrary.
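One small piece of the background above can be checked numerically: for an electromagnetic plane wave the field magnitudes satisfy B = E/c, and since c² = 1/(μ₀ε₀), the electric and magnetic contributions to the energy density u come out exactly equal. A sketch, with an arbitrary field amplitude:

```python
import math

# Consistency check (illustrative values): for a plane wave with
# B = E/c and c^2 = 1/(mu0*eps0), the electric and magnetic energy
# densities u_E = eps0*E^2/2 and u_B = B^2/(2*mu0) are equal.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
c = 1 / math.sqrt(mu0 * eps0)

E = 100.0                 # arbitrary electric field amplitude, V/m
B = E / c                 # matching magnetic field amplitude, T

u_E = 0.5 * eps0 * E**2
u_B = B**2 / (2 * mu0)
assert abs(u_E / u_B - 1) < 1e-9

# Total energy density, as defined in the article.
u = u_E + u_B
assert abs(u - eps0 * E**2) / u < 1e-9
```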
And I hope this article provides you with a deeper understanding of the same – that this law stems from the beautiful, infinitely elegant interplay that is the law of conservation of energy – that an induced electric field is simply a manifestation of energy being transferred from the magnetic field to an electric field/current.

CITATIONS:
Cadence System Analysis. "The Energy Density of Electromagnetic Waves." 13 Oct. 2022, resources.system-analysis.cadence.com/blog/msa2021-the-energy-density-of-electromagnetic-waves.
- Merging Nature and Technology With Biological Computers: Our Strongest Tool Yet
- By Viswanath Missula In the world of technology, a groundbreaking concept is emerging, combining the power of biology with the world of computing. Biological computers, living organisms with computational capabilities, hold the capacity to revolutionize fields and open up new possibilities to improve data processing, healthcare, and sustainability. At their core, biological computers utilize the inherent computing capabilities of biological organisms to perform complex computations. Akin to how the human brain manages to execute a ridiculous number of computations in its functioning, biological computers attempt to harness this ability to perform computations as required by us. However, a key point in this ongoing research and experimentation with biological computers is not to produce alternatives to traditional silicon-based computing, but rather to devise computational devices capable of tackling problems that this silicon-based computing is unable to address. One such application of these biological computers is DNA computing. This approach utilizes the unique properties of DNA molecules to perform computational tasks. DNA, with its ability to store vast amounts of information, offers a potent platform for information processing. In fact, this concept is hardly new. Back in 2002, a group of researchers managed to develop a DNA computer that a human could play tic-tac-toe with, which also ensured that the best outcome the human could achieve was a draw. Since then, we have continued to utilize DNA molecules’ interactions with their enzymes, non-coding regions and foreign particles to create computing systems. More impressively, biological computers are offering a promising approach towards addressing environmental challenges. One such remarkable application is their use as biosensors to monitor and detect pollutants. 
By engineering living cells with specific computational capabilities, researchers have been able to create biosensors that respond to changes in their environments and generate real-time data on pollutant levels. This is largely accomplished through internal changes within the cells, triggered by the environmental changes being detected, that are then converted into a directly interpretable form. Naturally, the journey of these biological computers is still in its early stages, and researchers are still exploring and refining the capabilities of these systems. Who knows where the field will go in the future? It might just as easily hit a dead end as an epiphany in the coming years. Regardless, even current biological computers, and the ways we are able to use them today, already demonstrate a plethora of possibilities across numerous fields.
- The Monster Group: A Mathematical Analysis of Group Theory
- By Siddharth Velan

The monster group, also called the 'friendly giant', was constructed in 1982 by Robert Griess as a group of rotations in an n-dimensional space. We will come to what that number n is after a brief introduction. Formally, a group is defined by taking a set and some operation that combines any two elements of the set to produce another element of the same set, in such a way that the operation is associative and an identity element is present. Every element must also have an inverse. In simpler terms, a mathematical group is an abstract set of elements with an operation added to it in such a way that all these identities are preserved. Geometrically, a group can be thought of as the possible symmetries or transformations of an object – rotations, switching points, flipping, etc. Groups are fundamental in math and have other applications as well, such as the Poincare group (space-time relativity symmetry) and the Galois group (symmetry of the roots of a polynomial). Like these, many other groups are used for various purposes.

The most basic groups are the simple groups – groups that cannot be broken down into smaller pieces, in the sense of having no nontrivial normal subgroups. The finite simple groups fall into 18 countably infinite families for which a pattern can be identified and applied. 26 other simple groups exist, called sporadic groups, and these are seemingly random with no identifiable system. The largest of all these sporadic groups is the monster group.

Coming back to the opening statement, a group can be represented in a number of dimensions, with a certain application for each. For instance, for any value of n, the symmetric group, of order n! (n factorial), has an n-dimensional 'natural permutation representation'. Our monster group is represented in a whopping 196883 dimensions.
The order of the gargantuan group is:

808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000
= 2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71,

which is a number unimaginable to us. An element of this group would be represented as an enormous matrix, with 196882 rows and 196882 columns. The monster group contains 20 of the sporadic groups, including itself, deemed 'The Happy Family' by Robert Griess. The other 6 were named 'Pariahs'. Although renowned, the group doesn't particularly represent anything. Unlike geometric groups, where you can visualize and apply the elements of the group, the monster group has no such pictorial representation.

Fascinatingly, based on this, the monstrous moonshine conjecture was developed, establishing an unexpected connection between the monster group (M) and modular functions, in particular the j function. The j function describes elliptic structures on the complex plane, something outside the scope of this paper. In the Fourier expansion of the j function, with q = e^(2πiτ) (τ being the half-period ratio of an elliptic function),

j(τ) = 1/q + 744 + 196884q + 21493760q² + 864299970q³ + ...

the coefficients can be expressed as linear combinations of the dimensions r of the irreducible representations of the monster group (M), with small integer coefficients. Letting r run over 1, 196883, 21296876, 842609326, 18538750076, 19360062527, 293553734298, ..., we get

196884 = 1 + 196883
21493760 = 1 + 196883 + 21296876
864299970 = 2·1 + 2·196883 + 21296876 + 842609326

where the numbers on the left are the coefficients of the j function, while the ones on the right are the dimensions r of the irreducible representations of the monster group (M). While such results are not coincidental, the exact relation between the monster group and fields such as physics is yet to be determined. Hence, while the monster is indeed unique and fascinating, its in-depth nature is incomprehensible to us as of now. What its applications can turn out to be is an event for the future.
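Both numerical claims above are easy to verify directly with arbitrary-precision integers: the decimal order against its prime factorization, and the first few moonshine decompositions (196884, 21493760 and 864299970 are the standard j-function coefficients):

```python
# The monster's order should equal its prime factorization exactly.
order = 808017424794512875886459904961710757005754368000000000
factored = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
            * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)
assert order == factored

# Monstrous moonshine: the q, q^2, q^3 coefficients of the j function
# decompose into the smallest irreducible representation dimensions.
dims = [1, 196883, 21296876, 842609326]
assert 196884 == dims[0] + dims[1]
assert 21493760 == dims[0] + dims[1] + dims[2]
assert 864299970 == 2 * dims[0] + 2 * dims[1] + dims[2] + dims[3]
```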
References:
"Group theory, abstraction, and the 196883-dimensional monster" – 3Blue1Brown
Monster Group – Wikipedia
Monstrous Moonshine – Wikipedia
Monster Group – Wolfram MathWorld
J-invariant – Wikipedia
Mathematical Group Theory – Wikipedia
The Friendly Giant – Robert Griess, 1982
- Hidden Math Behind Art: The Fascinating Geometry of Tessellations
- By Lalitha Rao

Art and math are two subjects that seem incredibly different, and while they may be in some aspects, they are, in actuality, remarkably similar. This review article is written in hopes of shedding light on a form of art that relies on a fundamental understanding of geometry: Islamic art. The famous Agra Fort draws millions of tourists every year. One of the many gems of the monument is its intricate artwork. These geometric patterns are mostly seen in places of worship; the stunning patterns repeat almost infinitely, looming over wishful worshipers, thus creating a divine presence. These masterpieces are created on the basis of tessellations.

For instance, you may have made a checkered pattern with colored pieces of paper. This is nothing but a tessellation, wherein our tiles (squares of paper) form a repeating pattern covering the backing, each clicking in perfectly with the next, consequently ensuring the absence of overlaps. The above is what we call edge-to-edge tiling. Here, every edge of the square touches the next, sharing the full side. The familiar "brick wall" is an example of non-edge-to-edge tiling, as one full side is not shared. There are several other forms of mathematical tiling, but most Islamic artwork employs the edge-to-edge form.

There are two types of tessellations: regular and semi-regular. Regular designs are symmetrical, utilizing only one regular polygon, i.e., a shape in which all sides and angles are equal. Only three regular polygons are able to form regular patterns: these shapes tessellate on their own and do not need others to fill in voids. They are the equilateral triangle, the square, and the hexagon. It is interesting that we see this in nature too, such as in a honeycomb or the outer layer of a pineapple, which are prominent examples of hexagonal tessellations.
For semi-regular tessellations, we look at shapes that cannot tessellate on their own, and hence they are more of a collection of shapes. The regular pentagon is a prime example. A vertex is where the shapes meet. Each internal angle of a regular pentagon is 108°. Mathematically, since this number is not a divisor of 360, there is a tessellation gap left. To resolve this, artists added other shapes, such as a decagon and a hexagon, and then tessellated this pattern.

In relation to tessellations, Islamic art is based on the divisions of a circle. There are several ways to divide a circle, but most designs fall into one of three categories:

Fourfold: patterns based on the division of a circle into four equal sections.
Fivefold: patterns based on the division of a circle into five equal sections.
Sixfold: patterns based on the division of a circle into six equal sections.

Fourfold patterns fit into a square grid, and hence they are fairly simple to tessellate. The same goes for sixfold, except they fit into a hexagonal grid. Fivefold patterns must be connected with other shapes to repeat, for the aforementioned reason that pentagons tessellate only semi-regularly.

A recurring theme in all Islamic artwork (architecture included) is the use of a circle. To Muslims, this symbolizes the circle of unity. The center of this circle is symbolic of God and the city of Mecca. From this circle, several regular polygons can be developed by the intersection of lines stemming from the centre with the circumference. What is so striking about this art form is the use of proportions: the ratios between the side lengths and diagonals of regular polygons. These ratios are what make Islamic art so aesthetically pleasing and, in the wise words of Aristotle, "maintain the just measure".

Fourfold Patterns

Fourfold patterns are fairly simple and are created by connecting points of intersection to each other.
In these types of patterns, the √2 proportion has been utilized. Moreover, the octagon, a shape associated with the eight directions of space and a symbol of the divine throne, is derived from these divisions.

[An example of a Moroccan tile using the octagon]

Fivefold Patterns

With fivefold patterns we see the use of the golden ratio, φ = (1 + √5)/2. This is what makes the art near perfect.

Sixfold Patterns

[A sixfold pattern]

Sixfold patterns are very robust. They make use of the hexagon, a shape which can tessellate on its own. The hexagon also contains √3 proportions and is a shape that resembles the Circle of Unity. The shapes and number of divisions across all forms of Islamic art draw deep connections with the Islamic culture itself. This subject of geometry, although mathematical, is known as "Sacred Geometry", as it connects art, math, and nature in abstract ways that contort the mind, similar to the turning of a compass.

Now, I cannot finish this article without showing some of my creations. Here I have created a fourfold design. Looking at the template, there are several other patterns that can be created, but here I have focused on a design surrounding a square. If this form of mathematical art intrigues you, and you want to dive deeper into its allure, the classical works of M.C. Escher and Charles Gilchrist, which elucidate further on the interfaces between geometry and art, are immersive starting points. The story of tessellations in Islamic art and architecture is merely a small reflection of a much grander intersection between two disciplines that would otherwise be rendered disjoint and distinct. In the hope that I have established this truth in this work, it can now be said that mathematics lives in art, and vice versa!
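The angle-divisibility argument behind regular and semi-regular tilings, used above for the triangle, square, hexagon and pentagon, can be checked in a few lines:

```python
# A regular n-gon tiles the plane on its own exactly when its interior
# angle divides 360 degrees; only the triangle, square and hexagon do.
def interior_angle(n: int) -> float:
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180 / n

tiling = [n for n in range(3, 13) if 360 % interior_angle(n) == 0]
assert tiling == [3, 4, 6]

# The pentagon's 108-degree angle does not divide 360, leaving the
# "tessellation gap" that semi-regular patterns fill with other shapes.
assert interior_angle(5) == 108
assert 360 % 108 != 0
```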
- Arbitrary Rotated Ellipses: A Mathematical Mystery?
- By Rohan Joshi

I recently researched quite an interesting way to find the equation of a rotated ellipse. An ellipse is characterized by the equation:

x²/a² + y²/b² = 1

where a is the length of the semi-major axis and b is the length of the semi-minor axis. A rotated ellipse is exactly what it says, its appearance being self-explanatory. In this article, I attempt to derive the formula for an arbitrary rotated ellipse: an ellipse with semi-major and semi-minor axes of lengths a and b, rotated by an angle θ. What's unique about this approach is that it looks at the ellipse from a 3-D point of view, while using concepts from simple harmonic motion.

Let's take an arbitrary ellipse, rewriting its equation as:

b²x² + a²y² = a²b²

Observe the left-hand side. This can be interpreted as the intersection of two 3-D graphs: the surface z(x, y) = b²x² + a²y² and the plane z = a²b². Now we do something unusual. We assume that the function z(x, y) describes the potential energy of an object. The force field associated with this potential is:

F = -∇z = (-2b²x, -2a²y)

that is, the matrix [[-2b², 0], [0, -2a²]] acting on (x, y). The eigenvalues of this matrix are -2b² and -2a² respectively. Their corresponding eigenvectors lie on the lines y = 0 and x = 0 respectively. The force field above implies that if a particle is placed on one of the eigenvectors, then it undergoes simple harmonic motion. Therefore, simply rotating the potential energy graph will not change the nature of the motion, only the direction. So, even if the graph, and consequently the force field, are rotated by an angle θ, the eigenvalues will remain the same, while the eigenvectors change. In fact, we can actually derive how this happens. The rotated eigenvectors will be perpendicular, and it's important to realize that the eigenvectors will be along the semi-major and semi-minor axes themselves. Thus, the lines containing the eigenvectors will become y = tan(θ)x and y = -cot(θ)x. The eigenvalues will remain the same (-2b² and -2a² respectively). The rotated z(x, y), which I'll call z'(x, y), will represent a new force field F', whose components we take to be linear:

F' = (αx + βy, γx + δy)    (1)

Why have I assumed the components of the force field to be linear?
Because the rotated graph is still a paraboloid, with quadratic terms; differentiating it yields linear terms. The matrix in (1) will have the same eigenvalues mentioned before, but with different eigenvectors. Thus,

[[α, β], [γ, δ]] (1, tanθ) = -2b² (1, tanθ)
[[α, β], [γ, δ]] (1, -cotθ) = -2a² (1, -cotθ)

This will give us 4 equations in α, β, γ and δ. The solutions of these equations are:

α = -2(b²cos²θ + a²sin²θ)
β = γ = 2(a² - b²)sinθcosθ
δ = -2(b²sin²θ + a²cos²θ)

As expected, β = γ, as the field must be conservative. Then, we can use the facts that ∂z'/∂x = -F'_x and ∂z'/∂y = -F'_y, with z'(0, 0) = 0, to derive z'. Thus,

z'(x, y) = (b²cos²θ + a²sin²θ)x² - 2(a² - b²)sinθcosθ·xy + (b²sin²θ + a²cos²θ)y²

This is the function z'(x, y), which represents the rotated paraboloid. Now it's important to realize that the graph above (z'(x, y)) is simply the original z(x, y), rotated. This means that we simply have to equate this function to z = a²b² to find the equation for our rotated ellipse. So, the equation of an ellipse rotated by an angle θ becomes:

(b²cos²θ + a²sin²θ)x² - 2(a² - b²)sinθcosθ·xy + (b²sin²θ + a²cos²θ)y² = a²b²

So that's it! That's the equation for an arbitrary rotated ellipse. The derivation is a little convoluted, but in the end, it leads to a rather fascinating and mathematically comprehensive result!
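The final equation can be sanity-checked numerically: with arbitrary a, b and θ, the rotated endpoints of the semi-major and semi-minor axes should satisfy it exactly. A sketch:

```python
import math

# Check the rotated-ellipse equation at the rotated axis endpoints,
# with arbitrary illustrative values of a, b and theta.
a, b, t = 3.0, 1.5, 0.6

def lhs(x, y):
    """Left-hand side of the rotated-ellipse equation."""
    s, c = math.sin(t), math.cos(t)
    return ((b*b*c*c + a*a*s*s) * x*x
            - 2 * (a*a - b*b) * s * c * x * y
            + (b*b*s*s + a*a*c*c) * y*y)

# Endpoint of the semi-major axis, rotated by theta...
x1, y1 = a * math.cos(t), a * math.sin(t)
# ...and of the semi-minor axis, perpendicular to it.
x2, y2 = -b * math.sin(t), b * math.cos(t)

assert abs(lhs(x1, y1) - a*a*b*b) < 1e-9
assert abs(lhs(x2, y2) - a*a*b*b) < 1e-9
```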
- The Enigmatic World of Antimatter: Unravelling the Mysteries of the Universe's Mirror Image
- Vipran Vasan

Matter is all around us. Tables, chairs, books, and laptops are all made of matter. The typical definition of matter is anything which has mass and volume; however, there is also something known as antimatter. As the name suggests, it is the opposite of matter in terms of charge and other quantum numbers, while parameters such as mass and spin remain the same.

The theorization of antimatter originated with Schrodinger's wave equation, a mathematical equation derived to describe the energy and position of an electron in space and time. The Schrodinger equation also defines something known as the wave function, which encodes the probability of certain measurements being true for a quantum particle, for example the position and spin of the particle. If we consider the solutions of its early relativistic forms, we end up with problems such as negative energies and negative probability densities.

Schrodinger's wave equation, wherein H is a Hamiltonian operator: Hψ = Eψ

When Paul Dirac came into the picture, he tried to solve these two problems, because the algebra allows for both positive and negative solutions. Physicists had been working with the positive solution the entire time, but since both values satisfy the same equations, antimatter should theoretically exist. The Dirac equation was derived in 1928, becoming the first ever theorization of antimatter.

[Figure 1 – the Dirac equation, a relativistic modification of Schrödinger's equation.]

This was later confirmed by the observation of a positron in a cloud chamber by Carl David Anderson in 1932. Since then, we have been observing antimatter in either the upper atmosphere of Earth or in particle accelerators. These observations have led physicists to discover the striking reality of what happens when antimatter meets matter: a process known as annihilation.
Annihilation does not mean complete destruction, but rather that the matter is converted into another form of energy, released as gamma rays. The energy released is directly proportional to the mass annihilated, based on E = mc². To put the enormous amount of energy released into perspective, if one kilogram of antimatter and one kilogram of matter were to annihilate each other, it would release about 3000 times more energy than the Hiroshima bombing. This is why scientists want to search for more antimatter as an energy source, though due to its asymmetric distribution (elucidated below), it remains scarce.

That is where one of the largest mysteries in cosmology comes into being. During the Big Bang, there should have been an equal amount of matter and antimatter created. However, now, after 13.7 billion years, antimatter is barely found. This is called "The Matter-Antimatter Asymmetry Problem," the cause of which is still unknown to us.

Despite this scarcity, modern physicists have found useful applications of antimatter that make it worth searching for. Antimatter is extensively used in the medical field in PET (Positron Emission Tomography). Upon injection, a small amount of radioactive substance travels to the site of a tumor or cancer, emitting positrons that annihilate with the electrons in our body. The trail of gamma rays released is read by sensors, and a detailed map of where the disease has spread is constructed. Furthermore, annihilation has been proposed for powering rockets in interstellar travel, due to the tremendous amount of energy released. Later research suggests positrons could find application in making fusion reactors more effective. If one can observe antimatter and understand it as an entity, then we could obtain insights into the fundamental laws of physics and even discover new physics beyond the Standard Model.
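The "about 3000 times Hiroshima" comparison above is easy to check to order of magnitude with E = mc². The Hiroshima yield used here, roughly 15 kilotons of TNT or about 6.3 × 10¹³ J, is an assumed reference value, not a figure from the article:

```python
# Order-of-magnitude check: 1 kg of antimatter annihilating with 1 kg
# of matter converts 2 kg of mass to energy via E = m*c^2.
c = 2.998e8                # speed of light, m/s
mass = 2.0                 # total mass annihilated, kg
energy = mass * c**2       # roughly 1.8e17 J

# Assumed Hiroshima yield: ~15 kt of TNT, about 6.3e13 J.
hiroshima = 6.3e13
ratio = energy / hiroshima
assert 2000 < ratio < 4000   # consistent with "about 3000 times"
```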
- Huygens' Coupled Harmonics: Entangled Across Space
- By Rohan Joshi A coupled oscillator consists of two simple harmonic oscillators which are connected, or "coupled", by an intermediary force or potential. In such a system, the oscillation of one particle depends not on one potential, but two. One of the most basic examples of a coupled oscillator is the following mass-spring system: As shown above, the two masses are connected not just to their own springs on the far left and right, but also through a "mediator" spring at the center. Therefore, it is easy to see that the state, or position, of mass m1 affects not just the force on itself, but also the force on mass m2. For example, if m1 is shifted to the right, then not only does its own spring pull it to, well, the left, but the compression of the central spring contributes as well. Further, because the middle spring is now compressed, m2 also experiences a rightward force. It's easy to see how the whole system oscillates not just by virtue of the left and right springs, but also the central one. Another popular example of a coupled oscillator is the coupled pendulum. It is basically a set of pendulums, generally two, connected by a string or a wooden plank (or any material, for that matter). As seen above, this simple setup is the most common "coupled oscillator" that can be formed; it's extremely simple to make. In fact, Christiaan Huygens famously observed the phenomenon of synchronisation in pendulum clocks connected by a wooden beam in the 17th century. Two pendulum clocks, starting at different positions, seem to eerily "synchronise" with one another, ending up either in phase or out of phase. However, Huygens' findings came before Newton published his Principia, so we shall be turning to Newton to help us find explicit time-dependent solutions to a coupled harmonic oscillator, such as the spring system above. TIME-DEPENDENT SOLUTIONS To start off, we shall make a few simplifying assumptions.
First, assume that the edge springs in image 1 have the same spring constant k, that the blocks have the same mass m, and that the central spring has spring constant K. Let x1(t) and x2(t) be the positions of blocks 1 and 2 as functions of time. Then k = ω^2m, where ω is the natural frequency of the uncoupled oscillator, and K = Ω^2m, where Ω is the natural frequency of the central spring when connected to only one of the masses. Now, the equations of motion, after all simplifications have been made, are as follows: x1'' = −ω^2 x1 − Ω^2(x1 − x2) and x2'' = −ω^2 x2 − Ω^2(x2 − x1). These two can be combined into a second-order matrix equation, x'' = Mx, where M has diagonal entries −(ω^2 + Ω^2) and off-diagonal entries Ω^2. It turns out that solutions to this equation are built from the eigenvectors e1 and e2 of this matrix, corresponding to eigenvalues λ1 and λ2. The eigenvalues and eigenvectors are: λ1 = −ω^2 with e1 = (1, 1), and λ2 = −(ω^2 + 2Ω^2) with e2 = (1, −1). Therefore, the full solution is x1(t) = A cos(ωt + δ) + B cos(√(ω^2 + 2Ω^2) t + ε) and x2(t) = A cos(ωt + δ) − B cos(√(ω^2 + 2Ω^2) t + ε). [Figures: position-time and kinetic-energy-time graphs of the two masses.] As shown in image 4, when one oscillator is at its peak energy, the other is generally at its trough. This is because the two oscillators transfer energy to each other via the intermediary spring. One extremely beautiful thing to realize is that, unlike a normal oscillator, this simplified coupled oscillator has not one, but two natural frequencies, corresponding to the square roots of the magnitudes of the eigenvalues; as such, a coupled oscillator can have multiple resonant frequencies. This is especially useful in strings, which can be thought of as many connected oscillators and therefore have many resonant frequencies. LIMITING CASES We shall now explore limiting cases of the oscillator, specifically the two cases ω << Ω and ω >> Ω, which correspond to strong and weak coupling respectively. Strong coupling: in the case ω << Ω, the solutions simplify: the second mode frequency √(ω^2 + 2Ω^2) becomes approximately √2 Ω, which dominates the motion. This doesn't look very different from our explicit solution, but a graph makes it much clearer. This is a graph of the positions of both strongly coupled oscillators.
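The two natural frequencies are easy to verify numerically. A minimal sketch, assuming the symmetric matrix form of the coupled equations of motion (the values of ω and Ω below are arbitrary):

```python
import numpy as np

# Verify the normal modes of the symmetric coupled oscillator x'' = M x.
# w is the uncoupled natural frequency, W the coupling frequency (arbitrary values).
w, W = 2.0, 1.5
M = np.array([[-(w**2 + W**2), W**2],
              [W**2, -(w**2 + W**2)]])

evals, evecs = np.linalg.eigh(M)   # eigenvalues returned in ascending order
# Expected: -(w^2 + 2 W^2) for the out-of-phase mode, -w^2 for the in-phase mode.
expected = np.array([-(w**2 + 2*W**2), -w**2])
print(np.allclose(evals, expected))   # True
```

The corresponding mode frequencies are the square roots of the eigenvalue magnitudes, i.e. ω and √(ω² + 2Ω²).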
Now, this is nothing like image 3: this one is much more sinusoidal, and the two sinusoids are nearly π radians out of phase with each other. Further, it turns out that the individual phases δ and ε don't impact the phase difference between the solutions. Therefore, in this limiting case, in which the middle spring is much, much stronger than the ones on the sides, it effectively "controls" the two masses, pushing them back and forth out of phase. This observation is akin to the "synchronization" Huygens observed. Weak coupling: in the case ω >> Ω, the solutions reduce to x1 ≈ A cos(ωt + α) and x2 ≈ B cos(ωt + β): pure, uncoupled sinusoids, which is expected in this case of extremely weak coupling between the two masses. [Figure: position-time graph of the weakly coupled oscillators.] As is visible, the two solutions are sinusoids. One difference here, however, is that the phase angles α and β do impact the overall phase difference, as shown in the image below, where the phases are arbitrarily set so that both waves are in phase. GENERIC OSCILLATORS In fact, an oscillator doesn't even need a spring to be coupled, just a coupling potential. Consider the coupling potential Φ(x1, x2). Because the motion is simple harmonic, the trivial solutions x1 = 0 and x2 = 0 are, in fact, stable. Therefore, for small values of x1 and x2, the partial derivatives ∂Φ/∂x1 and ∂Φ/∂x2 can be expanded to first order around (0, 0). Now, because Φ is a single potential field, its mixed second partial derivatives are equal (∂²Φ/∂x1∂x2 = ∂²Φ/∂x2∂x1), which expresses Newton's 3rd law for the pair of masses. Therefore, one can now define the total energy of a generic coupled oscillator: the kinetic energies of the two masses plus the potential Φ. Introducing a slightly modified Euler-Lagrange equation, phrased in terms of this total energy E, and using all the equations above, we get the equations of motion. Defining the mixed second partial derivative of Φ at (0, 0) as K, they read m x1'' = −(∂²Φ/∂x1²) x1 − K x2 and m x2'' = −(∂²Φ/∂x2²) x2 − K x1. These equations of motion describe the generic coupled harmonic oscillator and include the strength of coupling K.
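Between these two limits, a small but nonzero Ω produces slow beats in which energy sloshes back and forth between the masses. A sketch using the analytic normal-mode solution, with block 1 initially displaced and block 2 at rest (parameter values are arbitrary):

```python
import numpy as np

# Beats in a weakly coupled oscillator. With x1(0)=1, x2(0)=0 and zero initial
# velocities, the normal-mode solution is
#   x1 = (cos(wa t) + cos(wb t))/2,  x2 = (cos(wa t) - cos(wb t))/2,
# where wa = w and wb = sqrt(w^2 + 2 W^2). The values of w, W are arbitrary.
w, W = 2.0, 0.2                       # weak coupling: W << w
wa, wb = w, np.sqrt(w**2 + 2*W**2)    # nearly degenerate mode frequencies

t = np.arange(0.0, 300.0, 0.01)
x1 = 0.5*(np.cos(wa*t) + np.cos(wb*t))
x2 = 0.5*(np.cos(wa*t) - np.cos(wb*t))

# Block 2, initially at rest, eventually swings with nearly the full amplitude:
print(np.max(np.abs(x2)))             # approaches 1 (full energy transfer)
```

The slow envelope frequency (ωb − ωa)/2 sets the beat period, i.e. how long the energy hand-off takes.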
As this article concludes, I would like readers to consider this fact: if one looks at the potential energy of only one mass in a coupled oscillator, essentially writing it in terms of two variables: x1 and t, where the time dependence comes from the fact that x2 is written explicitly in terms of time, one realizes that when energy is lost, it is either stored in the spring or in the other mass. But what about momentum? When the first mass loses momentum, it obviously goes to the other mass, but looking purely at the first mass, it would seem as though, in a time varying potential, its momentum was “hidden”, in a way. So, my last question: can one “hide” momentum like one can hide energy?
- An Analysis of Forces on Stable Points
- By Rohan Joshi Math and physics are two subjects so entangled with each other that it's hard to tell which topic belongs to which subject. Take today's topic: stable points. In math, stable points can be defined as points on a function where the slope is zero but the second derivative is positive. In the context of physics, the term simply refers to points where the force on a particle is zero, so its velocity doesn't change. In this article, we will explore stable points in 2-dimensional force fields. Let's get started. The first thing we want to do is define an arbitrary conservative force field. A quick recap on conservative forces: these are forces with a potential associated with them. In a conservative field, the work done on a particle in going from A to B depends only on the endpoints A and B and is thus path independent. One can also say that these forces conserve the total energy of the system. Our approach begins by writing out F in terms of its individual components, in Cartesian coordinates. Then we'll assume that the origin of this system is a stable point, and finally make a linear approximation of the individual components of force around that point. If the point (x, y) is extremely close to the stable origin, we can represent F as a matrix-vector product, F ≈ Kr, where r = (x, y) and K is the matrix of partial derivatives ∂Fi/∂xj evaluated at the origin (equation 1). Now, the matrix K has 2 important properties: it is symmetric, because the field is conservative (F = −∇U, so the mixed partials of U are equal), and, near a stable point, its diagonal entries ∂Fx/∂x and ∂Fy/∂y are negative. Now what do we do? Do we solve equation 1? No. That methodology would be unnecessarily tedious. Instead, we can simply analyze the matrix-vector product to understand how a particle moves near a stable point. The first thing we realize is that, because of the properties of K, a particle moving near a stable point would spiral inwards. This is because both partial derivatives are negative, which implies that the force vectors cause an inward acceleration. Here's a representation.
Above is a sample vector field with a stable point at the origin. As you can see, in this vector field the arrows tend to point inward, and the force on a particle will cause it to spiral towards the origin. Now we get to something beautiful about equation 1. It looks a lot like the equation for a 1-dimensional simple harmonic oscillator, but extended to 2 dimensions. Indeed, you can see in the figure above that there are points where the force arrows point directly in towards the origin; a particle placed there executes simple harmonic motion rather than spiralling down. There are two lines in the figure along which a particle can execute SHM if initially placed there. Unsurprisingly, these coincide with the eigenvectors of the matrix K. In fact, along an eigenvector r we find that F = kr, where k is the eigenvalue corresponding to r; as you can see, k must be negative for SHM to occur. For the graph above, the eigenvectors lie on the lines y = x and y = −x, and the respective eigenvalues are −3 and −1. So, if we want to generalize this observation for any matrix K, do we solve for individual eigenvalues? No! We use the important fact that the product of the eigenvalues of any matrix is the determinant of that matrix. And we know that, near a stable point, the eigenvalues of K are both negative, so their product, the determinant, must be positive. Thus, we get: det K > 0 (2). We can rewrite inequality (2) in terms of the potential energy U: since F = −∇U, the entries of K are the negatives of the second partial derivatives of U, so the inequality becomes (∂²U/∂x²)(∂²U/∂y²) − (∂²U/∂x∂y)² > 0. Now, this is actually a theorem in mathematics, known as the second derivative test! It says that if this inequality is true, then the point (0,0), or any point where the inequality is being tested, could either be a stable point (which is what we're interested in for this article) or an unstable point. So, we have just proved a mathematical statement using physics! But of course, that wasn't the point of this article.
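The worked example (eigenvectors on y = x and y = −x, eigenvalues −3 and −1) can be checked numerically. The matrix K = [[−2, −1], [−1, −2]] below is reconstructed from those stated eigenpairs, not given explicitly in the article:

```python
import numpy as np

# K reconstructed from the example's eigenpairs:
#   K (1, 1)  = -3 (1, 1)   and   K (1, -1) = -1 (1, -1)
K = np.array([[-2.0, -1.0],
              [-1.0, -2.0]])

# Both eigenvalues come out negative, matching the SHM condition on each line...
evals = np.linalg.eigvalsh(K)          # ascending order
print(np.allclose(evals, [-3.0, -1.0]))   # True
# ...and their product, det K, is positive: the second-derivative-test condition.
print(np.linalg.det(K) > 0)               # True
```

The same two-line check works for any candidate K: negative eigenvalues imply a positive determinant, which is exactly inequality (2).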
The point was to analyze how a particle moves in 2 dimensions around a stable point in a potential, a point where no force acts on an object. We found that a particle placed very close to the stable point will tend to spiral towards it. More importantly, we also found that if a particle is placed along an eigenvector, it will execute simple harmonic motion along that line.
- Optimizing Prime Editing: Novel Genetics Against Life Threatening Disease
- By Rishikesh Madhuvairy For decades, genetic diseases seemed inevitable: a natural bane of human existence, arising simply from structural inadequacies in the human genome. However, one might think: "The human genome is itself the building block of an organism. It's the cement needed to consistently and precisely construct and reconstruct the building of a living thing." Not wrong in the slightest. As human beings, we have witnessed everything from the dawn of our lives to the current breaths we take, without even knowing what constituted our development over so many ages. It is indeed the ability of our genome to store information that helps us grow; that uniquely identifies us; and, above all, that biologically defines our existence. Although we believe we are the crown of creation, we are certainly not inherently perfect. In fact, the human genome is, in practice, the most 'imperfect perfect' biological structure that exists in an organism. Genetic material is encoded in our DNA. When a mutation changes that material, the instructions given to human cells are altered, likely for the worse. A mutation can result from a pathogenic intervention in the chemical structure of the DNA, or simply from the wrong amount of genetic material itself, such as an abnormality in chromosome count. Such changes to the genetic composition of a cell, tissue, or organ give rise to diseases like anaemia, Down syndrome, and liver fibrosis. The issue at hand is that, needless to say, we as humans are incredibly weak in the grand scheme of things. But as a species, we are also incredibly intelligent. So what was the most earth-shattering breakthrough that curbed these diseases for the better? As many know: CRISPR. CRISPR is the pioneer of gene editing, and is helping form the infant genomic industry for genetic engineering in the future.
The scientists who developed CRISPR analyzed how bacteria use a genetic immune system to evolve and ward off viral infections, and replicated the same mechanism, using technology, in human cells: to boost immunity, engineer stronger hereditary traits, and an amalgamation of similar facets. But in order to gain a deeper understanding of CRISPR technology, we must understand the Cas9 enzyme. CRISPR-associated proteins, or Cas, are the main proteins used for genomic editing. The function of a protein encoded by a strand of DNA depends on the sequence of its amino acids. DNA consists of many nucleotides, each containing one of four nitrogenous bases (adenine, guanine, cytosine, and thymine); the ordering of these bases makes each sequence unique. A codon is a sequence of exactly 3 nucleotides that specifies one amino acid of the protein being built. Amino acids are added one by one to a growing chain when a cell undergoes protein synthesis: the genetic instructions are separated and copied onto a single-stranded RNA (the messenger), which is then translated into amino acids. When large protein chains form, the end-to-end physical contacts between linked proteins are known as protein-protein interactions. Codons play a role here: a special type of codon, called the stop codon, signals the cell to end the growing chain, effectively halting protein synthesis. How is this incorporated into novel gene editing? The Cas9 enzyme acts as a pair of genetic scissors: guided to a target sequence, it halts the copying of instruction sets for the cell by breaking the chemical bonds within the DNA strand.
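The codon-reading logic described above can be sketched as a toy program. The mini codon table below is a tiny subset of the real genetic code, chosen purely for illustration:

```python
# Toy model of translation: read 3-base codons left to right until a stop codon
# halts synthesis. CODON_TABLE is a small illustrative subset of the genetic code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",   # a few real codon assignments
    "UAA": "STOP", "UAG": "STOP",                # two of the real stop codons
}

def translate(mrna: str) -> list[str]:
    """Build an amino-acid chain codon by codon; a stop codon ends the chain."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i+3], "???")
        if amino == "STOP":
            break                                # stop codon: halt synthesis
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUAAUUU"))   # ['Met', 'Phe', 'Gly'] - UAA halts the chain
```

Note how the trailing `UUU` after the stop codon never becomes part of the protein, which is exactly the halting behavior described above.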
The Cas9 enzyme uses base-pairing to find the nucleotides matching its guide RNA and cuts the DNA there, after which the remaining strand is chemically rejoined. This changes the structure of the DNA, and therefore changes the behavior and the instructions assigned to the cell. Overall, then, CRISPR technology can recode a sequence of cellular instruction sets: by changing a protein's characteristics, the cell's entire nature changes. This is the very same endonuclease present in genetically evolved bacteria. Thus, if bacteria can perform these operations to minimize the risk of viral disease, it is worth hypothesizing that those genomic traits are reproducible in human beings. All we need are the scientific foundations of genetic instructions, which, in some sense, are just like binary data on a computer. Current CRISPR technology has nonetheless been viewed as "iffy". Several biomedical reports have leveled allegations against CRISPR engineering, calling it "unethical", with unintended consequences. There is still no clear line drawn on the extent to which this technology can be used, which opens up possibilities of intentional mutations to living cells and genetic blending for strong super-human genes. It's the closest we've come to the supernatural. Introducing prime editing: a novel subset of gene editing that builds on the discoveries scientists have made about Cas9. Prime editing specializes in guiding the Cas9 enzyme to the right genomic site for rearrangement of nucleotides, while also carrying an RNA template for the sequence to be written into the genome. CRISPR prime editing is more technologically advanced than conventional CRISPR genomic editing because, in simple words, it allows for more precise and adaptable genetic changes.
This reduces the risk of an artificially induced genetic disease, and is therefore more remedial in nature. How does our existing knowledge of Cas9 go hand-in-hand with prime editing? Well, there is a special element in prime editing that makes its editing process far more accurate than conventional CRISPR genome editing: the prime editing guide RNA, or pegRNA. The Cas9 protein serves the same function as before, nicking DNA strands so they can be rewritten with a different genetic sequence. The pegRNA, however, acts as a "guide" that directs the Cas9 protein to a specific location in the genetic composition of the cell, enabling the DNA to be reshaped at a very exclusive part of the strand, the part prone to disease. The pegRNA also carries the template for the edit, which is written into the DNA by reverse transcription (performed by a reverse transcriptase attached to the editing complex). Prime editing does not involve gene copying, as one can see above; instead, it involves single-strand, RNA-templated writing. This is much quicker and more dependable than general CRISPR editing, as it opens up the scope for minute insertions, deletions, and alterations to the DNA's nucleotide structure. Conventional genomic editing acts on a much larger, broader part of the DNA strand, potentially manipulating genes in a dangerous way. Hence, prime editing has played a substantial role in this day and age in reducing unnecessary cycles of genetic duplication, using precise modifications on existing genomes instead. This also makes prime editing relatively cheaper: a precise edit requires only one pegRNA strand, whereas CRISPR needs multiple guide components. Prime editing has been an innovative breakthrough not only for pioneering a new pathway of discovery in multiple fields, but for redefining evolution as we know it.
The capabilities for creating viable products out of genetic engineering with prime editing are vast, and the investments into global healthcare research projects show how important prime editing is to the major companies furthering its cause in the quaternary sector: the cause of strengthening the human genome against disease at minimal risk. Prime editing is currently being used by highly reputed genomic technology firms, including GenScript, CRISPR Therapeutics, and Beam, among others, reflecting the change in consumer and societal attitudes towards revolutionizing genetic healthcare in the biomedical industry. Prime editing is being explored all over this industry, from treating the central nervous system of a Huntington's disease patient with gene repair, to mitigating the impacts of Klinefelter's syndrome on middle-aged adults exposed to the imminent risk of rheumatoid arthritis, to strengthening lymph nodes against microbial contaminants in chemically toxic air. If these are the scientific breakthroughs that prime editing is providing us in the 21st century, then it goes without saying that its future applications are far-reaching, if still uncertain. Currently, one specific future endeavour that I'd like to cite is cancer. The ability to reverse the mutations of cancerous tissue is therapeutically efficient, and moreover a step towards curing the disease. If engineers build further on prime editing techniques for altering or reversing the mutations of cancer cells, the same concepts could establish a solution to cancer as a malignant disease altogether. Similarly, there are numerous other possibilities where people could be saved from autoimmune diseases through a single base-pair correction in their genes. CRISPR prime editing, if publicized to the research world, can help achieve an outstanding collection of goals.
At the end of the day, as the infinite fabric of knowledge unfolds over time, we can only conceptualize, and then anticipate. The story and technicality of prime editing simply go to show that the more we discover, the more there is left to discover. Used to its purpose, with accuracy, precision, minimal duplication, delicate gene alterations, controlled protein synthesis, and enzyme-locating guides, prime editing could emerge as the defining technology of today's world. With one exact nucleotide altered, a disease that once threatened a life could be rendered nearly harmless.
- From Classical to Quantum: Investigating Quasiparticles
- By Rohan Joshi Quasiparticles are a relatively new concept that serves to analogize quantum mechanical ideas with those of classical mechanics. So, where do we start analyzing this phenomenon? Let us start by defining what a quasiparticle is. Assume that one has an infinitely long chain of atoms, or simple harmonic oscillators, in one dimension, linked together by springs. This isn't different at all from the coupled infinite-oscillator system studied in the previous article. The only difference here is that the system is unbounded, so a vibration of even one oscillator never reaches a far wall to bounce back from: it just keeps travelling down the line. If we give a slight push to one end of the chain, say to the first mass on a spring, we give it some initial energy that remains constant throughout the motion of all the masses. Soon, of course, the second mass will start oscillating with the same amplitude as the first, and then the third, and then the fourth, and so on. The energy is transferred from particle to particle because they're linked by the springs. Therefore, at any given moment in time, we will see that one mass/oscillator is vibrating with maximum amplitude, and then the next one is, and so on. What does this have to do with particles? Well, a particle is a phenomenon in physics, just like a wave. So, this "compression wave" described in the previous paragraph is doing something. What is it doing? One major feature of particles is that they have momentum and energy, and that they can transfer those quantities to other particles. What is happening in our system? Energy is being transferred between masses on springs, and there is, indeed, a medium transferring that energy: the springs themselves!
Each compression and extension of the wave, each vibration of a mass, can therefore be considered to occur under the influence of a "particle" of sorts that travels along the chain, giving energy to and taking energy from the blocks. We will now try to model this particle, and we shall see that it gives rise to wondrous links between classical and probabilistic mechanics, but that shall be reflected upon later in this article. First, we shall move from the discrete world to the continuous world. In this scenario, the "particle" can be felt as a region in which the atoms of the medium have maximum energy. Our goal now is to find a function that gives us the displacement y of each individual particle as a function of distance x along the oscillator chain. For now, we will also assume that our system is bounded and of length l. In this scenario, we will need to set a few conditions on y(x). Our first, of course, is that at the point x0 where the "crest", or maximum vibrational amplitude, currently sits, our function must be a maximum. Second, since our "particle" is localized, we want our function to vanish as the distance from x0 gets larger and larger. Lastly, we need the integral of our function over all space to converge, because the total energy of our system remains constant. Is there a function with all these properties? It turns out that yes, there is! It is… the Gaussian function or, in more colloquial terms, the normal distribution. While it shouldn't be surprising that a continuous limit of infinitely many contributions leads to a normal distribution, it is worth reflecting on what has happened. As we moved from discrete to continuous, the vibration of particles went from "it happens in only one place" to "all particles are affected by the vibration of one, and therefore each individual displacement can be modeled by a y(x)". Isn't that beautiful?
Getting back to the matter at hand, the wave function y(x) now becomes y(x) = A e^(−(x − x0)²/2σ²), where A is the amplitude of the most energetic vibration taking place at any given time, x0 is the center of the packet, and σ sets its width. Now, this expression is obviously a guess. However, we cannot underestimate the importance of just guessing in physics, because, as I have said before, when something reminds you of something else, it most likely is that something else. Now that we have a y(x), we can find the differential maximum energy dE of a mass element dm of the continuous oscillator: dE = ½ ω² y(x)² dm, where ω is the natural frequency of each spring. If the chain of atoms is assumed to have an approximately uniform mass density λ = m/l, where m is the total mass of the system, then dm = λ dx and our expression becomes dE = ½ λ ω² y(x)² dx. The graph of this is below: as you can see, the energy density follows the normal distribution, and it shifts with time as the oscillation moves ahead, concentrated at a "peak" that continually shifts. Thus, the total energy of our system will be given by the integral of all the individual energies. Pretty compact, isn't it? The l vanishes, giving us our answer independent of the length of the system. Now, if the average stiffness of our material is given to be K = ω²m, the energy can be expressed in terms of K. Therefore, this is the total amount of energy carried by this "particle", which we shall now call a "quasiparticle". Why a "quasi" particle? Because it isn't, of course, really a particle; it just shares certain properties. Indeed, the vibration of lattice atoms due to some external force, such as sound, can be considered as occurring due to the transfer of quasiparticles, or "phonons" in this case, between individual atoms, giving and taking away energy as they move. TRAPPED PHONONS Now that we have been introduced to the concept of a quasiparticle, let us look at the phenomenon of standing waves from a new perspective. You may have learnt of them in the form of waves on a string.
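A small numerical sketch of the claim that the packet carries a fixed energy as it moves. It assumes the Gaussian form y(x) = A e^(−(x − x0)²/2σ²) and the energy density ½λω²y² discussed above; the total energy comes out the same wherever the packet is centered (all parameter values are arbitrary):

```python
import numpy as np

# Energy carried by a Gaussian packet y(x) = A exp(-(x - x0)^2 / (2 s^2))
# with energy density dE/dx = (1/2) lam w^2 y^2. Arbitrary parameter values:
A, s, w, lam = 1.0, 0.5, 3.0, 2.0     # amplitude, width, frequency, mass density

def packet_energy(x0: float) -> float:
    """Integrate the energy density over a wide window around the peak."""
    x = np.linspace(x0 - 10, x0 + 10, 20001)
    dx = x[1] - x[0]
    y = A * np.exp(-(x - x0)**2 / (2 * s**2))
    return np.sum(0.5 * lam * w**2 * y**2) * dx   # Riemann sum of the density

# Same energy whether the packet sits at x0 = 0 or has moved to x0 = 4.7:
print(np.isclose(packet_energy(0.0), packet_energy(4.7)))   # True
```

Analytically the integral evaluates to ½λω²A²σ√π, which indeed contains no reference to the packet's position.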
I shall try to explain them from the perspective of coupled oscillators and quasiparticle energies. So, let us look at our setup. Once again, we have a line of continuous oscillators, but this time bounded on both sides and therefore of fixed length l. Such an oscillator system, of course, can have infinitely many normal modes and resonant frequencies. Let us look back to what we uncovered in the last article: in our first mode of vibration, our continuous y(x) has one extreme; in the second, it has two; and so on. If our system is forcibly oscillated by a driving force at some resonant frequency ω, then the associated eigenmode is such that some parts of the system move in one direction with some amplitude and other parts in other directions, oscillating back and forth across space. So, we shall make another guess for y(x) in this scenario. If A is the maximum possible amplitude of an oscillator in the system (dictated by its input energy, of course), then y(x) = A sin(nπx/l), where n corresponds to the nth mode of oscillation. We can check that this works. Why? Because if we plug in n = 1, our graph is half a wavelength long, from x = 0 to l; for n = 2, the graph holds two half-wavelengths (one full wavelength); and so on. So, what is the significance of this? It tells us that, if our system is "trapped", then it can only oscillate in very specific modes. Because it is continuous, it turns out that only an integer number of half-wavelengths, at very specific frequencies, can exist in this system. This is unlike a discrete system, in which arbitrary frequencies can be used to induce motion. Further, we can again think of each eigenmode as a quasiparticle, or a phonon, trapped inside a finite box. For different external vibrations, it can have different values of energy. How? Let us do the same analysis as before and see where we end up. Now, y is obviously not just a function of x; it is also a function of time t.
Therefore, we can write the following: y(x, t) = A sin(nπx/l) cos(ωn t), where ωn is the frequency corresponding to the nth mode of oscillation. Using the product-to-sum identity, we can rewrite the above as y(x, t) = (A/2)[sin(kx − ωn t) + sin(kx + ωn t)], where k = nπ/l is the wavenumber of the oscillator. This looks exactly like two separate waves moving in opposite directions superimposed on each other – the prerequisite for a standing wave to exist. We only have one last bit to prove. We know that ωn/k represents the speed v of a wave in the material. But since we know that k is quantized in multiples of π/l, ωn has to be quantized as well for the wave speed to be the same for all modes of vibration. Therefore, we have come to another wonderful conclusion: the eigenfrequencies are quantized as ωn = nω1, integer multiples of the lowest, or natural, frequency. We can thus infer that, given quantized resonant frequencies ωn, one can create phonons in matter with distinct energy states depending on the frequencies. But what is this total energy? Let's find out. Again, we consider the energy differential at any given time: dE = ½ λ ωn² y² dx. Plugging in what we know and integrating over the domain [0, l], we get E = ¼ m ωn² A² = ¼ m n² ω1² A². Pretty neat, isn't it? What's even better is that this equation tells us that the energy of the oscillator/quasiparticle is itself quantized! So, what is the relation to quantum mechanics here? Firstly, a misconception must be cleared up about a particular entity being "quantum". It is true that quantum mechanics generally plays a much larger role in the behavior of subatomic/fundamental particles than in macroscopic phenomena like sound waves. However, one major tenet of modern particle physics is the existence of fields and the emergence of particles from those fields. The same thing is happening here: a phonon is arising from the continuous field of atoms all around it. Though discrete, at large enough scales such a system can definitely be considered continuous and uniform.
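The mode shapes and quantized energies can be checked numerically. This sketch assumes y_n(x) = A sin(nπx/l), ωn = nω1, and the energy density ½λωn²y² (parameter values arbitrary); each mode should vanish at both fixed ends and carry E_n = ¼ m ωn² A²:

```python
import numpy as np

# Check mode shapes y_n(x) = A sin(n pi x / l) and energies E_n = m w_n^2 A^2 / 4.
A, l, m_tot, w1 = 1.0, 2.0, 0.8, 3.0
lam = m_tot / l                          # uniform mass density
x = np.linspace(0.0, l, 200_001)
dx = x[1] - x[0]

energies = []
for n in (1, 2, 3):
    y = A * np.sin(n * np.pi * x / l)
    wn = n * w1                          # quantized eigenfrequency w_n = n w_1
    E = np.sum(0.5 * lam * wn**2 * y**2) * dx   # Riemann sum of energy density
    energies.append(E)
    assert abs(y[0]) < 1e-12 and abs(y[-1]) < 1e-12   # fixed ends stay fixed

expected = [0.25 * m_tot * (n * w1)**2 * A**2 for n in (1, 2, 3)]
print(np.allclose(energies, expected, rtol=1e-4))     # True
```

Note the n² scaling: the second mode carries four times the energy of the first, which is the quantization of energy states mentioned above.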
And, as we have just seen, trapped oscillating systems can exist only at integer multiples of a base normal-mode frequency, something known as quantization. Quantum mechanics isn't just about electrons; it is about any system in which energy can exist only in fixed states. Thus, what we have just studied is very, very analogous to the quantum harmonic oscillator. However, there are still a few more things we can play around with to fully complete our study of standing waves...