I thought that's exactly what GR predicts. Because of the spacetime geometry inside the event horizon, particles have no other direction to go than the singularity at the center. (Even moving forward in time makes the particle go towards the singularity.)
I don't buy that without further explanation. Total gravity always pulls towards the center of mass of the object/gaseous substance/whatever. For example, any point on or inside the Sun will experience a net gravitational pull towards the center of the Sun, because that's where the total gravitation points (which is also the reason why stars collapse inwards when their fusion reactions become too weak to maintain the shape of the star).
Even if at the beginning the entirety of the energy in the universe had been totally evenly distributed, the total gravity would nevertheless still point towards the center of mass of all that energy. (Of course I'm a complete layman in physics, so there may well be something I'm not understanding here.)
I don't quite understand that, but "gravitational time dilation" gave me an idea for why the Universe did not simply collapse right back into a singularity: It expanded faster than gravitational waves could move (AFAIK gravity does not propagate instantaneously, but at velocity c or something along those lines), and hence an event horizon did not have time to form, as the energy escaped faster than the gravity well could propagate.
That actually makes sense.
You don't need to be "outside" the Universe (if that concept is even possible) for the problem to happen. Just take a portion of the Universe at its initial stages. The majority of the energy in the Universe was certainly within its own Schwarzschild radius even while the Universe itself was already bigger than that (even if only by a small margin).
Besides, I don't think you need any external observer for a mass to collapse because of reaching the critical density needed for that to happen. It's not like the external observer somehow triggers the collapse due to the observation.
It's a question of density. The density of my body is not even nearly large enough for it to collapse into a black hole (even though sometimes I can be pretty dense, ha ha!) Every body of mass has a critical density such that, if it is reached, the body will collapse. More precisely, the density needs to be such that all the mass is within the Schwarzschild radius of the object (this radius being determined by the mass of the object).
At the beginning of the Universe this density was certainly large enough (because all of the energy in the Universe was compressed into a sphere smaller than the Schwarzschild radius, and consequently the density of the matter was certainly beyond the critical limit).
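Just to put some numbers on the "critical density" idea (a rough sketch of my own; the formula r_s = 2GM/c^2 is the standard Schwarzschild radius, but the helper names are mine):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2*G*M/c^2, in meters."""
    return 2 * G * mass_kg / c ** 2

def critical_density(mass_kg):
    """Mean density (kg/m^3) needed to pack the mass inside its own r_s."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4 / 3 * math.pi * r ** 3)

# One solar mass (~2e30 kg) gives an r_s of about 3 km and a critical
# density around 1.8e19 kg/m^3. Since r_s grows linearly with mass,
# the critical density falls as 1/M^2.
```

That 1/M^2 scaling is exactly why "all the energy of the Universe" is such an extreme case: the more mass you consider, the lower the density needed for it to be inside its own Schwarzschild radius.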
I don't understand what this has to do with it. As said, it's just a question of density.
Don't many of the videos have a youtube version? Perhaps it would be preferable to embed those instead (and possibly give a link to the other sites as text).
(or I guess we could not embed the videos and just link to them, but ignore that option ok?)
The published movie description pages load the player only on demand. I think it would be possible to do that with the gruefood delight list as well, although it might require someone adding the support. (But as said, this would still not solve the issue of not being able to stop the stream download...)
There's one thing that it's not at all clear to me about the Big Bang theory, and I can't find an answer to it.
There was a time in the beginning of the Universe when all the energy in the Universe was inside its own Schwarzschild radius. This would mean that the energy should have been incapable of escaping that radius, and instead collapsed back to a singularity (or whatever is happening inside a black hole). However, it expanded beyond this radius nevertheless. I don't understand how.
I understand that General Relativity allows the distance between two points in space (and consequently the distance between two particles) to grow faster than c, and hence it perfectly allows for superluminal expansion of the Universe (which is ostensibly still happening today, as the observable Universe is, as far as we know, smaller than the entire Universe). This is also the currently established hypothesis about the first moments of the Universe (ie. the Universe expanded at an exponential rate at the very beginning).
I don't know, however, if or how superluminal expansion explains the energy of the universe expanding beyond its own Schwarzschild radius at the beginning. Is this the reason, or is it something else?
Btw, the current gruefood delight page is quite heavy to load because of all the archive.org flash players being loaded. Also, unlike the youtube player, the archive.org player has no option to stop downloading, which means that if you start watching one movie but decide to go to another one midway, your bandwidth will get clogged (at least if you have a puny 1Mbit connection like I do) and the only way to fix it is to reload the page (or go to the previous page and back).
It would be better if the player was loaded only on demand, if that would be possible. The problem of not being able to stop the stream downloading is unsolvable, though (because it's up to archive.org and not us).
I'd say the major problem with the run is that there's only a limited number of tricks that can be shown, and the author is forced to wait for a rather long time before completing a level, and hence the run becomes inevitably quite repetitive. If the level timer was significantly lower, or if there were fewer levels to complete, it may be more entertaining, but as it is, I think it's just a bit too long.
As mentioned previously there are 100% runs which waste time by collecting everything in the game.
I wouldn't say it like that. Having an alternative goal is not "wasting time". It's having an alternative goal. Those runs still try to complete the game as fast as possible while completing that goal, and the amount of wasted time is minimized.
They look the same to me as a concept. I mean, they may be technically different (HQx produces smoother edges while ScaleX produces more pixelated edges, and there are differences between the shapes of edges produced), but I think the relevant question is not whether you should use one filter which tries to smartly create interpolated shape edges or a different one, but whether you should use any such algorithm at all. I think the HQx poll answered that question.
The descriptions of recommended movies on the front page appear to have been broken by the addition of streaming video icons (whatever you call the pictures for archive.org, youtube, etc).
Yeah, it's a bit annoying. Since it hasn't been fixed, I'm starting to wonder if it's actually intentional...
While the idea could work in theory, in practice it takes too much time for the timer to run out. Having to wait for the timer to get to zero takes too long, and the run becomes quite repetitive. If there were significantly less time to complete the levels, then this idea could perhaps work ok.
Another issue is that we already have several categories for this game.
Sorry, I have to vote no.
Maybe I'm being obtuse here (which wouldn't be really surprising), but I still can't understand what the big idea is with these monster-size encodes.
Playing 1920x1280 H.264 video requires so much processing power that even slightly older computers can have a hard time doing it. And this is assuming you are using an optimized player for that. Many of these videos are being watched with a Flash-based player, which isn't what one could call very optimized (if you compare CPU usage between eg. mplayer and the YouTube Flash player for the exact same video, there's a considerable difference). Also HD video tends to require more bandwidth than a normal-sized video (especially since people usually expect HD video to have a higher quality than a normal-sized video, ie. less compression artifacts).
The only argument I have heard for these monster-sized videos is that it reduces the effects of the mandatory colorspace reduction. However, if the typical screen resolution of a console game is something like 256x192, do you really need to make the video almost 8 times larger on both axes to reduce the colorspace reduction artifacts? (More precisely, 1920 is 7.5 times as large as 256.) Wouldn't something less be enough for that purpose?
it will say something like "Indeterminable" or something like that
To be accurate, it says "(unknown)".
Also the movie statistics page will skip movies with zero rerecords for the rerecord statistics. (This was originally made specifically so that movies with false or lost rerecords could be zeroed and thus skipped by the statistics.)
- I removed the F-Zero movie. Its inclusion was presumably due to its being a very specific demonstration (racing game, one level only, no game ending). However, there are other movies, such as Zanac (one level only, although with a game ending), Minesweeper (one level only, trivial game ending), and movies that play one mode (too many to name; specific mode, no real ending). Their inclusion is also arguable, but there are a lot of them. Thus, currently, "very specific demonstration" is not a criterion.
If a game has a well-defined ending but the TAS doesn't go there, it's IMO very unambiguously a concept demo and should be considered as such.
If a game has no well-defined ending (eg. levels just repeat over and over ad infinitum) but the author has chosen an "ending point" by some rational and well-defined criteria (which is also accepted by the community), then it could be considered a "legit" TAS.
Only if quantum computers become a reality and affordable. Which might never happen.
(I can't say that finding the optimal input for a game is an NP problem, but it certainly sounds like it, and naive brute-forcing is very definitely an exponential algorithm, and you are not going to perform it on a run of any significant length in a reasonable time no matter how much computing power you have.)
So I'm kind of late to the party (no more cake!), but why did this run get three no votes?
They were so amazed at the sheer awesomeness of this run that they accidentally selected the 'no' option because they were still high from the experience.
JavaScript is not one of the most efficient scripting languages in existence (even though some web browsers have optimized the execution speed quite a bit in the last years). Browser-based JavaScript also was never designed for drawing graphics at high speed. Seems to be a rather suboptimal platform for that purpose.
Well, I suppose it's a project akin to those raytracers written in PostScript (which are thus completely runnable in a PS-based printer).
I think Google outright lies about the number of search results. Sometimes I try to access page 100 or some similar number by changing the URL accordingly just to see what sites are that far down in the results, and Google says there are no more results. This despite the reported hundreds of thousands or millions of results. Maybe it repeats results to just pad the numbers.
The reported number of hits may be correct, but perhaps Google will only send a limited number of results to the user to conserve resources or whatever.
a) For any point (x,y) and a polygon represented in the way described above by the matrix A, provide an algorithm that checks if (x,y) is inside the polygon or not.
That's a basic problem of computer graphics (and geometry in general). The basic algorithm is: Trace a ray (eg. towards the positive x direction) from (x,y) and count how many polygon edges it intersects. If it intersects an odd number of them, the point is inside; otherwise it's outside.
Algorithmically the problem can be simplified by translating all the points so that (x,y) ends up at (0,0). After that it suffices to determine which polygon edges intersect the positive x axis.
Determining whether a polygon edge intersects the x axis is trivial (one endpoint has a negative y and the other a positive y). Once you have determined that, you have to see whether it intersects it on the positive side. This can be determined with a cross product of the edge's two (translated) vertex vectors (its sign gives the answer).
Note that there's a small catch in that algorithm, which happens if a vertex point is exactly on the positive x axis (after the translation). You have to make sure you don't count the two edges sharing that vertex point as both intersecting the x axis. (This can be done either by testing if one of the vertex y coordinates is >=0 and the other <0, or alternatively by multiplying the vertex y coordinates by 2 and adding 1, especially if using integer coordinates, to force them to be unequal to the y coordinate of the tested point.)
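The whole procedure could be sketched like this (my own hedged sketch; I'm assuming the polygon is given as a list of (x, y) vertex tuples rather than the matrix A of the original problem, and the names are mine):

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test: translate the point to the origin and count
    how many polygon edges cross the positive x axis. Odd => inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        # translate so the tested point is at the origin
        x1, y1 = poly[i][0] - px, poly[i][1] - py
        x2, y2 = poly[(i + 1) % n][0] - px, poly[(i + 1) % n][1] - py
        # edge straddles the x axis; the >=0 vs <0 asymmetry is the fix
        # for a vertex lying exactly on the x axis
        if (y1 >= 0) != (y2 >= 0):
            # the x coordinate where the edge crosses the x axis is
            # (x1*y2 - x2*y1) / (y2 - y1); only its sign matters
            if (x1 * y2 - x2 * y1) / (y2 - y1) > 0:
                inside = not inside
    return inside
```

The `(y1 >= 0) != (y2 >= 0)` test is the first of the two fixes mentioned above for the vertex-on-the-axis catch: a vertex with y exactly 0 counts as being on the non-negative side, so the two edges sharing it can't both register as crossings.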
b) Assume you have two polygons represented by the matrices A and B. The intersection of A and B is a polygon C (at least I think so... please provide a counter example if I'm wrong). Provide an algorithm that gives the elements of C in terms of the elements of A and B. If A and B don't intersect, C should be the empty matrix.
If polygons A and B can be concave, their intersection may produce more than one polygon. This is a much more complicated problem (but obviously not unsolvable).
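For the simpler case where at least the clipping polygon is convex, the classic Sutherland-Hodgman algorithm is a reasonable starting point (a sketch under that assumption; I'm again using vertex lists instead of matrices, and the names are mine):

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: clip polygon `subject` against a CONVEX
    polygon `clipper` (both lists of (x, y) tuples, counter-clockwise).
    Returns the intersection as a vertex list (possibly empty)."""
    def inside(p, a, b):
        # p is on the left of the directed edge a->b (CCW clipper)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # intersection point of segment p-q with the line through a-b
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    n = len(clipper)
    for i in range(n):
        # clip the current output against one edge of the clipper
        a, b = clipper[i], clipper[(i + 1) % n]
        input_list, output = output, []
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
        if not output:
            break  # no intersection at all
    return output
```

As noted above, this breaks down when both polygons are concave: the true intersection can consist of several disjoint polygons, while Sutherland-Hodgman returns a single (possibly degenerate) vertex list, so a general solution needs something more elaborate, like the Weiler-Atherton or Greiner-Hormann clipping algorithms.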