There is no definite answer to that, but astronomers have looked into it for decades and so far have seen no evidence of any boundaries or wraparounds. Not even a trace thereof. So it's either actually infinite or so big it is functionally infinite.
Kinda complicated question. It's physically impossible to be sure of the infinity, but it seems to be the case. Measurements indicate the Universe is at least very close to flat: the total density parameter Ω is close to 1, with an error of less than about 2%. But how can we get a zero error? We can't.
And this also presupposes an overall spherical shape, not e.g. a Poincaré dodecahedral space, with the observable Universe contained within a "face".
Given that the observable universe is smaller than the entire universe, what kind of "evidence of any boundaries or wraparounds" would you expect to see?
The observable universe is smaller than the entire universe because the part of the universe that's farther away from us than a certain distance is receding from us faster than c, which makes it physically unobservable by any possible method we know of.
(Although due to the rubberband phenomenon I think the horizon of observability is not exactly at the distance where space recedes from us at c. I'm not exactly sure how far it is, theoretically speaking.)
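A back-of-the-envelope estimate of where the recession speed hits exactly c (the Hubble radius) is easy to sketch. The value of H0 below is an assumed round number, and the true observability horizon is considerably farther out because the expansion rate has changed over time:

```python
# Rough estimate of the Hubble radius: the distance at which space
# recedes from us at exactly c. Assumes H0 ~ 70 km/s/Mpc (a round
# number, not a precise measurement).
c = 2.998e8             # speed of light, m/s
H0 = 70e3 / 3.086e22    # Hubble constant, converted from km/s/Mpc to 1/s
hubble_radius_m = c / H0

LY = 9.461e15           # one light-year in meters
print(f"Hubble radius: {hubble_radius_m / LY / 1e9:.1f} billion light-years")
# ~14 billion light-years
```

The particle horizon (edge of the observable universe) comes out much larger than this, roughly 46 billion light-years, precisely because of the "rubberband" effect mentioned above.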
Well, I'm not the one looking, but there has been astronomical research into finding evidence of curvature, e.g. in the form of repeating regions of the sky (in particular, CMB patterns). There were promising results, but as far as I know, nothing of the sort was confirmed in the end.
Will a perpetual motion machine ever be invented? The most prominent example I can think of is the drinking bird.
However, that has a fatal flaw: the water evaporates over time, even if you were to keep it in a 0°C vacuum chamber. Pendulums don't work due to friction, even in a vacuum. Anything involving magnets won't work because the magnets eventually demagnetize. The only thing I'm reasonably sure of is that it would involve using gravity, as that's the only constant force on Earth.
This, kind of.
There were also some practical experiments involving counterbalancing gravity and sun's heat energy, but that's wonky and does not work continuously for obvious reasons, let alone produce anything useful. At least the clock tells time.
Perpetual motion machines come in many kinds. The first one is a machine that violates conservation of energy, moving faster and faster without any external source of energy. For something like this to be possible, it would require a drastic overhaul of physical theories. Every viable physical theory postulates that elemental interactions respect conservation of energy, and there's no evidence that it's violated.
Machines of the second type can extract more useful energy from a system than the second law of thermodynamics allows, thus reducing the entropy of the system; this is what is commonly called a Maxwell demon. These are widely considered impossible too, but for a different reason than the first ones.
The biggest difference is that, in the second kind, you can still write down some sensible looking equations that show it working. The biggest question is whether these equations can be implemented in a realistic system at all. Mathematically speaking, the second law of thermodynamics works only for ergodic systems, and there are systems which are non-ergodic, so we could circumvent the second law by engineering the system to be non-ergodic.
Perhaps the simplest example I know is the so-called ellipsoid paradox, where you set up some mirrors that supposedly focus all the light from one black body onto another, so the system never reaches thermal equilibrium. In the article I linked, the authors perform some numerical simulations and show that, under the realistic assumption that the black body is not a point but has a finite size, the ray focusing does not work perfectly, and the system reaches thermal equilibrium after all.
This is a very general problem. Although you can come up with models that suggest the 2nd law can be violated, they are ultimately all approximations, and in every case we know of, using a better approximation makes the system ergodic again and brings back the 2nd law. That's a rather interesting way to understand why the 2nd law works so well.
I'm going through Goldstein's Classical Mechanics text, and I had a question regarding the connection between exact differential equations and holonomic constraints.
Why exactly is it that if you have differential equations of constraint, and those equations are exact (perhaps with an integrating factor), then the constraints are holonomic?
Build a man a fire, warm him for a day,
Set a man on fire, warm him for the rest of his life.
It's been a long time since I had a look at analytical mechanics, but I suppose it's because if you differentiate a holonomic constraint, which is always something like f(x,y) = c, if I remember correctly, you end up with an exact differential equation?
Holonomic constraints are of the form:
f(q1, q2, ...; t) = 0
where the qi are coordinates (not momenta).
So I guess if you had a system of differential equations that formed your constraint set, then if they were exact (optionally with an integrating factor), you could recover f. That makes sense. Thanks.
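To make that concrete, here's a minimal sketch of the standard argument for a two-variable constraint (my notation, not Goldstein's): a Pfaffian constraint a(x,y) dx + b(x,y) dy = 0 is exact precisely when ∂a/∂y = ∂b/∂x. Exactness guarantees a function f with ∂f/∂x = a and ∂f/∂y = b, so the constraint is just df = 0, which integrates to the holonomic form f(x,y) = c. For example, y dx + x dy = 0 is exact (both cross-partials equal 1) and integrates to f(x,y) = xy = c. If the equation is exact only after multiplying by an integrating factor μ(x,y), the same argument applies to μa dx + μb dy, so you still recover a holonomic f.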
This is more like a shitpost, but interesting nonetheless. I came across this image:
It turns out that the apparent paradox of three Magnemites having a mass 10 times greater than an isolated one is possible if we take into account quantum mechanics and special relativity. Intuitively, it can be viewed as follows. When two systems interact, they can have a binding energy as a consequence of this interaction. It's because of this energy that molecules do not break apart, for example. At the same time, the relativistic mass-energy equivalence E=mc^2 tells us that mass and energy are more or less the same thing, so if we have a lot of binding energy, it can manifest itself as mass.
The classic example is the QCD binding energy, which makes the proton much heavier than its constituent quarks, for example.
So, I know people posed this challenge as a joke, but science can explain the higher Magneton mass. In the Pokemon universe, the magnemites probably have a "Magnemite charge" which introduces a binding energy between them, and gives rise to this extra mass when they combine to form a Magneton :D
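Just for fun, a toy back-of-the-envelope of how much binding energy that extra mass would represent. The masses here are the usually quoted Pokédex values (an assumption, obviously):

```python
# Toy calculation: the "binding energy" needed to account for the
# extra mass of a Magneton vs. three isolated Magnemites.
# Assumed Pokedex masses: Magnemite 6.0 kg, Magneton 60.0 kg.
c = 2.998e8                              # speed of light, m/s
m_magnemite = 6.0                        # kg
m_magneton = 60.0                        # kg

delta_m = m_magneton - 3 * m_magnemite   # 42 kg of "extra" mass
E_binding = delta_m * c**2               # E = mc^2, in joules
print(f"binding energy: {E_binding:.2e} J")
```

Note the sign is the opposite of ordinary bound states (which are lighter than their parts); like the proton/QCD case, the combined system here would have to carry extra interaction energy rather than a deficit.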
Yup, the binding energy in QCD is different from the one in traditional nuclear physics. It happens to create mass because of some peculiarities that only happen in relativistic field theories.
Does this have anything to do with the question of why nuclear fusion releases so much energy?
Also, on another tangent: Does a charged battery weigh more than a depleted battery?
Yes, and yes, but the effect is very slight (though the chemical difference shows up in more noticeable ways: depleted alkaline batteries, for example, bounce more than charged ones). It's also part of the reason why in-spiraling black holes release so much energy (in the form of gravitational waves).
When you bring two objects together that form a very strong bond of some kind, the energy of the bond is released somehow. Whether that's electric (chemistry, in the form of photons typically), strong force (nuclear fusion), or gravity (black holes coalescing).
One very straightforward way to see why nuclear reactions release so much energy is to simply look at the measurement units.
Chemical reaction energies are usually of the order of a few electron-volts, while in nuclear physics the reaction energy is usually measured in mega electron-volts, a million times greater. The reason for this depends a lot on how deep you want to go in explaining the strong interaction, but ultimately it boils down to measuring stuff, even at the level of particle physics where you study QCD: in the end you have to fit some parameters to make the experimental measurements come out.
Perhaps the simplest explanation is the de Broglie wavelength and relativity. The momentum of a confined particle is inversely proportional to its de Broglie wavelength, which is of the order of the size of the system confining it. In a crude approximation, we can assume that relativistic energy is proportional to momentum, and we find that energy is inversely proportional to length.
From this, since the nucleus is around a million times smaller than an atom, you would expect that reactions that involve nuclei to be a million times more energetic than chemical ones, which involve the atom as a whole.
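A crude numerical version of that estimate (the absolute chemical number overshoots, since bonds involve outer electrons spread over more than an ångström, but the ratio of scales is the point):

```python
# Order-of-magnitude estimate: E ~ pc = hc/lambda, taking lambda to be
# the size of the system (atom ~ 1 angstrom, nucleus ~ 1 femtometer).
hc = 1240e-9          # h*c in eV*m (i.e. 1240 eV*nm)
atom_size = 1e-10     # m, ~1 angstrom
nucleus_size = 1e-15  # m, ~1 femtometer

E_atomic = hc / atom_size      # atomic-scale energy estimate, eV
E_nuclear = hc / nucleus_size  # nuclear-scale energy estimate, eV
print(f"scale ratio: {E_nuclear / E_atomic:.0e}")  # ~1e5
```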
I suppose this slightly relates to physics, at least technically speaking:
I recently purchased a 120-hertz display (to replace my older 60-hertz one). The https://www.testufo.com/ site shows three images scrolling horizontally at the same speed, but the top one updates 120 times per second (if you have a 120-hertz display), the middle one updates 60 times per second, and the bottom one 30 times per second.
I was puzzled why the 60-hertz scrolling image looks visibly blurrier than the 120-hertz one. If you pay really, really close attention you might just barely see the 60-hertz image moving in a slightly less smooth manner than the 120-hertz one, but this effect is very hard to notice. The major difference between them is the clear difference in blurriness. If they were randomized, it would be very hard to tell which one updates at 120Hz and which one at 60Hz by looking at the smoothness of the motion, but it's extremely obvious by looking at the blurriness.
But why does it look blurrier? I don't think the web page itself blurs the image.
Thinking about it for a while, I think I figured out the reason: It's probably mostly caused by pixel response time.
Pixel response time is the average time that it takes a pixel to change color. This display is categorized as having a 4ms pixel response time. I wouldn't be surprised if in a 1ms display the 60Hz image would look much sharper.
So, what I'm thinking is that in the 60Hz scroll, the picture makes bigger jumps. Due to pixel response time, the image remains visible both in its previous position and its new position at the same time, for a little while. It may be just like a millisecond or two that they are visible at both positions, but still enough to notice.
In the 120Hz scroll the pictures are of course also visible at two positions at a time for a millisecond or two, but the difference is that the distance between these two images is half of that of 60Hz. That's why the 60Hz version looks blurrier: The image copies are farther apart from each other, making the result look blurrier than in the 120Hz version, where they are closer together.
One way to test this theory would be, as mentioned, test with a 1ms display (which has otherwise the same resolution and dimensions). I don't have one, though.
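The jump-size argument is easy to put in numbers. The scroll speed below is a made-up value in the rough ballpark of what testufo uses:

```python
# At a fixed scroll speed, a lower refresh rate means bigger per-frame
# jumps, so the "old" and "new" images that coexist during the pixel
# transition sit farther apart -> more apparent blur.
scroll_speed = 960                        # pixels per second (assumed value)
for refresh_hz in (120, 60, 30):
    jump_px = scroll_speed / refresh_hz   # distance moved per frame
    print(f"{refresh_hz} Hz: {jump_px:.0f} px per frame")
```

So at 60 Hz the two ghost copies are twice as far apart as at 120 Hz, matching the blurriness you describe.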
This is an interesting phenomenon I saw being discussed recently: the Spring Paradox, where suspending a mass from a system of springs and then cutting part of the arrangement can actually cause the mass to RISE, because the effective spring constant of the arrangement increases (the springs go from acting in series to acting in parallel).
Link to video
I think this is the generalized form of the paradox to other phenomena: https://en.wikipedia.org/wiki/Braess%27s_paradox
But interesting to see it being demonstrated using pure physics!
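For the spring setup specifically, a minimal sketch of the series-vs-parallel numbers (two identical ideal springs assumed; whether the mass actually rises also depends on the lengths of the slack safety strings in the demo):

```python
# Spring "paradox" numbers: a mass hangs from two identical springs.
# Before the cut they act in series (each carries the full weight);
# after the cut they act in parallel (each carries half the weight).
g = 9.81   # m/s^2
m = 1.0    # kg (assumed)
k = 100.0  # N/m per spring (assumed)

stretch_series = 2 * m * g / k        # total extension, springs in series
stretch_parallel = m * g / (2 * k)    # extension, springs in parallel
print(stretch_series, stretch_parallel)
```

The total spring extension drops by a factor of four, which is the slack the mass can recover to end up higher.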
Zeno's paradoxes (yes, plural; it's not just one paradox, but a collection of thematically related paradoxes) essentially deal with the question of how it is possible, in the actual real universe, for infinitely many things to happen in a finite time.
The best-known and archetypal one is this: Achilles starts running towards a tortoise far ahead, while the tortoise is walking in that same direction. After a certain amount of time Achilles will have reached the point where the tortoise was at the start, but by this time the tortoise will have moved a certain distance. Achilles will then reach this new point in a finite time, but by then the tortoise will have again moved a little. And so on and so forth. Achilles will need to traverse an infinite number of these ever-decreasing gaps before he reaches the tortoise.
It's easy to dismiss this thought experiment as funny but silly. Of course we can calculate the time it takes for Achilles to reach the tortoise and that's it. The infinite number of gaps is compensated by their exponentially decreasing length, so the total length of all gaps is finite, and thus can be traversed in a finite time.
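The convergence is easy to see numerically; the speeds and head start here are made-up round numbers:

```python
# Zeno's gaps form a geometric series whose partial sums converge to
# the closed-form catch-up time.
v_achilles = 10.0    # m/s (assumed)
v_tortoise = 1.0     # m/s (assumed)
head_start = 100.0   # m (assumed)

t = 0.0
gap = head_start
for _ in range(50):                  # sum the first 50 "Zeno steps"
    t += gap / v_achilles            # time to cross the current gap
    gap *= v_tortoise / v_achilles   # tortoise has opened a smaller gap

t_exact = head_start / (v_achilles - v_tortoise)
print(t, t_exact)   # both ~11.11 s
```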
However, many philosophers contend that the math doesn't actually explain the actual paradox. It gives us the answer of what the finite traversal time is, but it doesn't explain what the original paradox is actually wondering. And that is: How is it possible, in the actual real universe we live in, to perform an infinite number of steps? This is not the imaginary universe of mathematics, with its zero-sized points and infinitely perfect circles. This is the real world, where infinities don't exist.
In other words, the paradox is actually as relevant today as it was 2500 years ago. The math doesn't give an explanation of the paradox, it just tells us the measurements of it.
I am thinking that, perhaps, what the paradox is actually asking is whether the universe is continuous or quantized. Maybe quantum mechanics solves the paradox somehow? Maybe Achilles does not, in fact, perform an infinite number of actions because he can only move a Planck length at a time, and will stay at each point an absolute minimum of one Planck time unit before... I don't know... teleporting to the next Planck-unit position?
This would make the number of steps taken by Achilles finite, because there are only (total distance divided by the Planck length) positions he can occupy in between. On the other hand, it raises the question of how moving from one Planck position to the next happens.
(It may also be that I am talking complete BS here, because I honestly and literally have no idea how the Planck length and Planck time affect the universe and the position and movement of objects in it.)
Some personal thoughts on the topic:
It seems to me that the crux of the paradox lies in the conflict between 'infinities don't exist in the real world' and the fact that the infinity in the Zeno example emerges intuitively and doesn't seem to violate any physical principles.
I would first say that I understand 'infinities don't exist in the real world' to mean that infinity can never be the result of a measurement. This almost seems true to me by definition, as any numerical measurement will necessarily yield a numerical answer and infinity is not a number (it's certainly not a rational number at any rate, which, as far as I know, is what all numerical measurements end up being). At best, you may measure something to be extremely large relative to other related things and say that it is 'practically infinite', but this of course is not the same as actually 'measuring an infinity'. This doesn't logically exclude the possibility that there may exist an infinite number of things in the universe, but just says that we would never be able to conclude this to be the case experimentally, i.e. by actually counting stuff. The universe could therefore, at least in principle until demonstrated otherwise, be infinite in size and infinitely divisible without contradicting the idea that infinities don't exist in the real world in the way I've described here.
Having said that, I don't really think there is a paradox, because there is no measured infinity. When I move from A to B, who's to say how many 'events' have occurred? I could reasonably say one event has occurred, namely the event of moving from A to B. Of course I can increase the number of events indefinitely by taking ever more intermediate points, operating under the definition that an event is the motion between two points that I've measured, but even discounting various physical limitations that prevent us from being able to measure arbitrarily small distances or times in practice, we are still never going to measure an infinity, because whatever we ultimately end up measuring is going to be some number. In mathematics, like you said, the paradox doesn't exist because we can deal with the relevant infinite sums.
So even if spacetime is ultimately infinitely divisible, I don't think there's any problem. By the way, as far as I know the Planck length is just the length scale at which gravity becomes non-negligible for elementary particles. Since a good theory of quantum gravity doesn't exist yet, it's also the length scale beyond which we don't really know what's going on. It's not that the universe is some kind of lattice where the Planck scale is the smallest unit of length (feel free to correct me if there do exist theories like this, as I'm definitely not very well read on this topic).
I suppose the infinity comes from the question: "When you finally reach the tortoise, how many times have you traversed the distance between you and the position of the tortoise at that moment?" In other words, how many times did you traverse the gap between you and the tortoise (if the tortoise had stayed stationary each time you started to traverse a gap)?
In theory you could mark the point at which the tortoise is currently, and every time you reach such a point, you mark a new point where the tortoise is, and there would be infinitely many points.
Alternatively, "how many times did you share a position with the tortoise that it was previously in during your travel?"
Because of my work I have had to acquaint myself with geographic coordinate systems. I find the subject more interesting than I thought.
One could naively think (and probably the vast majority of people do) that geographic coordinates are quite simple: The poles are unambiguous, fix the zero meridian somewhere, and then just use degrees of latitude (north-south angle) and longitude (angle around the equator). Simple and clear.
But it's not that simple. Why? Because continents move, that's why.
In numerous applications it's useful, even necessary, to know the exact coordinates, to the centimeter or even millimeter level, of particular points, e.g. inside a city or overall inside a country. The problem is that if you used global coordinates for this, what you measure exactly today will be off by several centimeters in a year's time, because of continental drift. This is problematic. Because continents move several centimeters per year, if you needed the exact coordinates of a point at the millimeter level, you would need to be constantly adjusting and re-measuring, which isn't practical.
The zero meridian used in global coordinates suffers from the same problem: Where should this zero meridian be? And, inevitably, the zero meridian also moves because of continental drift. Other continents move in different directions relative to it, so their distance to the zero meridian is constantly changing.
This is why continents and individual countries have local coordinate systems, which are fixed to the soil of the country rather than to the Earth as a whole. The local coordinates move with the country as the continents drift, which accumulates significantly less error over the years. It's also the reason why, if you ask the corresponding city or country department for the exact coordinates of a particular point, they will usually give them in a local coordinate system rather than universal coordinates. The local coordinates are more useful.
An even more problematic measurement is that of height. In some situations height is actually even more important than latitude and longitude. Many applications need to know also height at the centimeter, even millimeter level. The problem is: What should height be relative to? This is actually not an unambiguous or easy question.
Relative to sea level? Problem is that sea level isn't actually constant. It's at different distances from the center of the Earth at different parts of Earth. Global coordinate systems (such as GRS80) express height relative to a hypothetical ellipsoid that approximates the surface of the Earth. Problem is, again, that the Earth isn't actually a perfect ellipsoid (even ignoring mountains etc), and that approximation is just that: An approximation. It can differ from actual sea and ground level by quite a margin in some places. In fact, if you use the GRS80 ellipsoid as your height system, you'll get seemingly paradoxical situations where rivers flow uphill, according to GRS80 heights, because gravity on an ellipsoid is a bit wonky like that.
This is why, once again, most countries will use a local height system as well. This local height can differ by several meters from a global coordinate system like GRS80 (for example the current standardized height system in Finland differs from GRS80 by something like 16 meters or so). They usually also fix the rivers-flowing-uphill problem.
One attempt at a completely neutral and universal coordinate system is the ECEF XYZ coordinate system. Rather than use degrees of latitude and longitude, it just uses Cartesian coordinates: The origin is at the center of the Earth, the Z axis goes directly through the north pole (which is unambiguous) and the X axis goes through the (internationally agreed) zero meridian. The unit of distance is 1 meter. All positions anywhere on, in or above the Earth can be thus expressed as x, y and z coordinates.
Of course this suffers from the drift of the zero meridian, so its coordinates can only be used relatively temporarily for anything, and cannot be really relied on staying the same long-term, especially if used for local coordinates.
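For the curious, the geodetic-to-ECEF conversion itself is a short formula. This is a sketch using the GRS80 constants (the standard textbook conversion, ignoring datum epochs and drift entirely):

```python
# Sketch: geodetic (lat, lon, height) -> ECEF XYZ on the GRS80 ellipsoid.
import math

A = 6378137.0              # GRS80 semi-major axis, m
F = 1 / 298.257222101      # GRS80 flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

# A point on the equator at the zero meridian lands at (a, 0, 0):
print(geodetic_to_ecef(0.0, 0.0, 0.0))
```

Note that the height h here is ellipsoidal height, which is exactly the quantity that differs by meters from the local height systems described above.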
But this got me thinking, and finally after this gigantic wall of text, comes the physics question: Does the gravitational center of the Earth stay the same, or does it change over time? The poles in this system are the two points on earth that stay stationary as it rotates. But do these two points also stay the same over time, or do they drift?
One of the classical original "objections" to special relativity is that of the beam of light moving on a surface. I think the original thought experiment is a circular screen of very large radius around a rotating spotlight, with the light spot on the screen seemingly moving faster than c. A more modern variant is beaming a laser onto the surface of the Moon and moving it around, the spot moving faster than c.
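The numbers are easy to check: even a leisurely sweep produces a superluminal spot at lunar distance (the angle and timing below are made-up illustrative values):

```python
# Apparent speed of a laser spot swept across the Moon's surface:
# spot speed = distance * angular rate.
import math

c = 2.998e8          # speed of light, m/s
d_moon = 3.84e8      # average Earth-Moon distance, m

sweep_rad = math.radians(1.0)   # sweep through 1 degree...
sweep_time = 0.01               # ...in 10 milliseconds
spot_speed = d_moon * sweep_rad / sweep_time
print(f"spot moves at {spot_speed / c:.1f} c")
```

A barely perceptible flick of the wrist, and the spot "moves" at over twice the speed of light.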
I have always found this objection quite simplistic and naive, because anybody can understand why it's not valid. What I don't understand, however, is the refutation pretty much always given, which I think was the original one and has been blindly repeated to this day. This refutation says something strange about "information" and how it can't move faster than c, and yada yada.
I don't know why they came up with that strange argument about "information". The explanation is a lot simpler: nothing is moving from point A to point B on the surface. There isn't anything, not a particle, not energy, nothing, that travels from point A to point B. It's merely light hitting the surface at one spot and reflecting back to the viewer. Any appearance of "movement" is just a visual illusion our brain creates, not something physically moving along the surface from one place to another. It's no different than if you flashed the spotlight for a millisecond in one direction and a microsecond later in a completely different direction: you'd see the spot of light suddenly "jump" from one place to another, but nothing moved between those points. Any sense of "movement", or of "jumping from one place to another", is just an interpretation of our brains.
I have no idea what "information" has anything to do with this. It's just much simpler to say "nothing is moving from one point to another on the surface".
I'd say that "you can't move information" explanation, while in some sense might be technically correct, is a rather non sequitur explanation, and a misleading one at that. It kind of implicitly admits that something is indeed moving from point A to point B on the surface, but due to some weird magic you can't transfer information between those points that fast, as if the thing moving on the surface magically moved faster than information. This is just all a huge red herring. "The limit of moving information" has nothing to do with this. Nothing is moving on the surface. I don't even understand why this is even brought up.
Yet, you still hear this strange explanation to this day, even from brilliant science popularizers like Neil Degrasse Tyson.
I don't think the concept of information in relativity even existed back in the day. Some people tried to get around the restriction that no physical entity may exceed c by constructing situations where one could obtain some information from a distant place faster than a signal could have traveled that distance. But even those situations were shown not to actually carry "information" faster than c.