Posts for amaurea

Experienced Forum User, Published Author, Former player
Joined: 2/19/2007
Posts: 424
Location: UK
Mr. Kelly R. Flewin wrote:
Too many Yoshi Wing stage exits. I mean it got REALLY bad, to the point that when I saw it being executed, I kept fast forwarding to the end of the level every time.
Wow, I guess there's no accounting for taste. I found the fantastical strategies the authors used to set up these Yoshi Wing exits to be among the most interesting moments in the run, in particular gems like in Vanilla Secret 1.
DarkMoon: Sounds like you uploaded a deduped video to youtube. The audio stutters and skips during scenes with no motion, which is where deduping would kick in.
Comparison with the previous run: Link to video You may notice that there are sometimes two ghosts on the screen in this comparison. This happens in stages Mario completes twice. I do this to ensure that at least one of them is the one it makes sense to compare with. The previous version of the script would try to figure that out itself, but wasn't very good at it. The numbers at the top of the screen are the number of frames between when this run entered the current room and when the ghost did (line 1), and an estimate of the number of frames saved since that point (line 2).
It seems your goals are, by priority:
1. Collect all upgrades
2. Take no damage
3. Damage as few enemies as possible
4. Fastest completion
I'm guessing 2 and 3 are why you don't go under that obstacle in the intro stage? I haven't watched all of the run yet, but why are some boss fights a lot slower? Is it due to your goal of not taking any damage? Did you really have to kill the jumping shield guys in the shoryuken stage? You seem to lose a large amount of time simply due to lack of precision in your movements. For example, in the ascent to the boss refight teleporter room, you lose over 10 frames even though that section has no enemies or damage that you have to avoid. This seems to happen every time you have to scale a wall.
This should be on-topic for this thread: Link to video
cwitty wrote:
I forgot the main reason for the web service: to prevent cheating. You rely on the players sending each other their input at the same time. I don't see how to keep one player from hacking their "external program" and consistently sending their move a tenth of a second after receiving the other player's move, with a bot that picks a response based on that move. (A cheating player behind a fast internet connection would look just like a non-cheating player behind a slow internet connection.)
A protocol that should eliminate the possibility for this kind of cheating:
  • Both players prepare their inputs, and compute a hash based on them
  • Players exchange hashes
  • After receiving the opponent's hash, each player sends his actual inputs
  • After receiving the opponent's inputs, compute their hash, and verify that it matches the hash that was sent.
By sending a hash, the opponent has committed himself, but not revealed his actual inputs. He can't change his inputs based on seeing yours, as that would produce a different hash. There is the possibility of brute-forcing the hash to recover the inputs, but by transmitting frames in groups of 6, that possibility is mostly eliminated, since the number of bits in the input would be about as large as the number of bits in the hash.
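A minimal sketch of this commit-reveal round in Python. The function names are made up for illustration, and the random salt is an extra safeguard beyond the frame-grouping idea above (it rules out brute-forcing the hash regardless of how few input bits a group contains):

```python
import hashlib
import os

def commit(inputs):
    """Commit to a block of inputs: return (salt, digest). The digest binds
    the inputs without revealing them."""
    salt = os.urandom(16)  # random salt rules out brute-forcing short inputs
    digest = hashlib.sha256(salt + inputs).digest()
    return salt, digest

def verify(inputs, salt, digest):
    """Check that revealed inputs match the earlier commitment."""
    return hashlib.sha256(salt + inputs).digest() == digest

# One round: both players commit, exchange digests, then reveal.
a_inputs = b"\x01\x02\x03\x04\x05\x06"  # 6 frames of player A's input
b_inputs = b"\x10\x20\x30\x40\x50\x60"  # 6 frames of player B's input
a_salt, a_digest = commit(a_inputs)
b_salt, b_digest = commit(b_inputs)
# ...digests are exchanged first; only then are (salt, inputs) revealed...
assert verify(b_inputs, b_salt, b_digest)         # honest reveal checks out
assert not verify(b"\x99" * 6, a_salt, a_digest)  # tampered reveal is caught
```

Since each player receives the other's digest before revealing anything, a cheater who waits for his opponent's inputs can no longer change his own without failing verification.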
Post subject: This time SwordlessLink is wrong
The previous time SwordlessLink and the rules were in conflict, I agreed with SwordlessLink, and so did most others, and eventually the rules were changed. This time, however, I agree with the rules. Verification movies are essential for TASVideos' integrity, and as demonstrated by Kejardon, it is possible to cheat extremely subtly using movies with embedded savestates. I have no problems with the category "glitched newgame+" or its execution here, though I don't know enough about the game to be a qualified judge of the latter, and I dispute ais523's broad statement that such a category by necessity has to be boring. Anyway, I can't vote yes for this until a verification movie is provided.
ZanasoBayncuh wrote:
Okay... I may have been wrong, watching the run here I've come to the conclusion that this type of category - which I desperately want to exist - needs to nail down a very specific understanding of what a "glitch" is. So far I've still seen phasing through immobile enemies and I continue to see running super fast by pointing at a downward angle repeatedly - which can't possibly be an intended effect. These are glitches, no?
Yes, they are. I would also be interested in seeing a fully glitchless run - partially because that would provide an exact measurement of how much time the glitches save. But it is hard to formulate a precise list of what is a glitch - what the developers intended and what they did not. How about things like moving quickly by using damage boosting, or stopping on a dime, or gravity jumps, etc.? So quite a lot of subjective judgment would be involved in such a category. On the other hand, many other categories are pretty subjective too, like those for fighting games. So I guess in practice we would find out how acceptable such a category is when somebody actually makes a submission.
AnS wrote:
I also support "Executes Arbitrary Code", which is rather modest, but that's good, because the shown result (the picture and music) is not as extremely mindblowing as it could be in theory. So, while I think this movie should get a Star, I wouldn't try to oversell it by calling with pathos-filled names like "Total Control" or Matrix-related stuff.
Personally, I don't think this run needs to be modest. But I was thinking of "Executes Arbitrary Code" as a general goal which would be possible for other games, and not just a term for this single run. Basically, instead of aiming for the fastest time, such a run would try to impress by totally reprogramming the game, and would be judged in terms of technical merit and entertainment. If that is possible in Battletoads or Super Mario World, then one could make "Executes Arbitrary Code" TASes for them too. Obsoletion could be handled the same way we do for fighting games etc. now, i.e. "is the new run more impressive than the old one?"
To be honest, most physicists do not think singularities ever occur at all, but that they are unnatural artifacts that appear when extrapolating general relativity too far outside its region of validity. For example, standard cosmology predicts that the density and curvature should approach infinity at every point as we move back in time (the big bang singularity), but the only thing we know for sure is that they were very high in the past, not that they were infinite. We need a theory of quantum gravity to fill in the parts of the picture where general relativity gives up and produces singularities. PS. It is no problem for an infinitely large universe to expand. Expansion can be defined locally as every small volume element being replaced by a slightly larger volume element. The expansion of the universe does *not* refer to it expanding into some void outside it, which is the mental model that I think is responsible for most of the confusion about infinite universes expanding.
I second the suggestion for the category name "executes arbitrary code". This is the standard term for this kind of exploit when it happens in modern computer systems, and it spells out exactly what is being done. I think this term is clearer than "total control". Edit: Here is the Wikipedia article on the term: https://en.wikipedia.org/wiki/Arbitrary_code_execution
Wikipedia wrote:
It is the worst effect a bug can have because it allows an attacker to completely take over the vulnerable process. From there the attacker can potentially take complete control over the machine the process is running on.
creaothceann wrote:
Hack: A modification of a game that was not programmed explicitly with that possibility in mind. Mod: A modification of a game that was programmed explicitly with that possibility in mind.
I agree. This is exactly what my connotations with these words are.
The standard term is ROM hacking. Modding may be more popular in other game modification contexts, but not when it comes to the console games that dominate here.
Post subject: Re: SMW TAS Comp
Warp wrote:
HiipFire wrote:
- No discussing strategies/routes with someone else - No collaborating with someone else on your run unless specified otherwise
How do you expect to enforce those rules?
Perhaps he's expecting people to be honourable?
Warp wrote:
- no cheating blablablabla
What exactly constitutes "cheating" in this context (especially assuming that the entries would be emulator keypress files; it's difficult to "cheat" to create them)?
These movies will be starting from a savestate from the looks of it. There is plenty of room for cheating there. For an extravagant example, see Kejardon's "proof" of the green chozo room in Super Metroid, where he hacks the smv savestate to gain control over the game loop, after which he can do anything.
p4wn3r wrote:
However, the authors of the article are emphatic saying that we have always observed galaxies receding faster than light. The problem is that when you set a(t) = exp(Ht) you're extending the expansion of the dark-dominated era to the end of time, this is a good approximation for short time intervals, but like in the snail in the rubberband analogy, it can take enormous amounts of time for the encounter to happen, so this approximation breaks apart. In the context of the Lambda-CDM model, at no moment the Hubble sphere is an event horizon, it's irrelevant if we're passing through a dark-dominated era, since galaxies outside the sphere can reenter it when this era ends.
Yes, that's true, but in the Lambda-CDM model you can't break out of a dark energy era, so if the concordance model is correct, the universe will become ever more dark energy dominated. As you say, light from previous eras will still be reaching us during this time, and so we will still in theory be able to observe objects receding superluminally (albeit enormously redshifted). But the question I thought Warp was asking was whether any object receding at speed x *today* will be possible to see in the future, not whether we currently can see any objects that were receding at speed x when their light was sent out. And the answer to that is that as the universe becomes increasingly dark energy dominated, the speed x approaches c from above.
Warp wrote:
Of course the rate of expansion affects whether light reaches the other object or not. That's not what I was asking. I was asking about "the largest recession speed observable could be anything between 2c and 4c." I don't understand why that depends on anything else then the recession speed itself. Why isn't the maximum observable recession speed a certain fixed value?
Imagine a universe which first expands ridiculously fast, so that some object is moving away from you at 100000c. At this point the object emits light towards you. If the expansion continues at the same speed, this light will not be able to reach you, but if the universe later stops expanding, or starts contracting, then the light would have no problem reaching you. So as you can see, in theory there is no limit to what recessional velocities could be observed. In practice, though, the universe doesn't expand like that, and the article p4wn3r linked to looks at realistic scenarios for the expansion, getting 2-4c for the maximal recessional speed.
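A toy numerical model illustrates this (pure Python, with made-up numbers rather than a realistic cosmology: a huge constant expansion rate that freezes at t = 1). Light makes essentially no comoving progress while the rapid expansion lasts, but covers unbounded comoving distance once the expansion stops:

```python
import math

c = 1.0   # units where c = 1
H = 12.0  # made-up, very fast expansion rate

def a(t):
    """Toy scale factor: rapid exponential expansion that freezes at t = 1."""
    return math.exp(H * min(t, 1.0))

def light_comoving_distance(t0, t1, steps=100_000):
    """Comoving distance covered by light, X = int(c/a(t) dt), trapezoid rule."""
    dt = (t1 - t0) / steps
    vals = [c / a(t0 + i * dt) for i in range(steps + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# While the rapid expansion lasts, light makes almost no comoving progress...
X_during = light_comoving_distance(0.0, 1.0)
# ...but once expansion stops, progress resumes at the constant rate c/a(1),
# so the light eventually reaches objects that were receding faster than c.
X_after = light_comoving_distance(1.0, 1.0 + 1e7)
assert X_after > 100 * X_during
```

Letting the second integration run longer makes X_after as large as you like, which is the "no limit in theory" point above.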
Warp wrote:
Also note that in my original question I'm talking about a hypothetical situation where the expansion of the universe is such that the two objects are receding from each other at a constant speed (that's larger than c). (I suppose this means that the expansion of the universe would have to be asymptotically decelerating for that to happen.)
To have two objects receding from each other at constant speed requires decelerating expansion, as you say. v = HaX' = const => da/dt = v/X' => a(t) = 1 + v/X'*(t-t0) when we require a(t0) = 1, and where both v (recession speed) and X' (comoving coordinate of object) are constant. The propagation of light is then: X(t) = int(c/(1+v/X'*(t-t0)) dt, t=t0..t) = cX'/v * log(v/X'*(t-t0)+1). This expression is not bounded, so no matter which v you choose, you will always be able to observe it (highly redshifted) if you wait long enough in this case. Again, physically this is because the universe is slowing down, letting the light eventually start to make progress and reach us, even if it was initially carried away from us.
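A quick numerical check of this closed form, in plain Python with toy values (v = 3c, X' = 2, units where c = 1; the variable Xp stands in for the post's X'):

```python
import math

# Decelerating expansion that keeps the recession speed of an object at
# comoving coordinate Xp constant: a(t) = 1 + (v/Xp)*(t - t0).
c, v, Xp, t0 = 1.0, 3.0, 2.0, 0.0  # toy values; the object recedes at 3c

def X_closed(t):
    """Closed form from the post: X(t) = (c*Xp/v) * log(1 + (v/Xp)*(t - t0))."""
    return (c * Xp / v) * math.log(1.0 + (v / Xp) * (t - t0))

def X_numeric(t, steps=200_000):
    """Direct midpoint-rule integration of dX = c/a(t) dt."""
    dt = (t - t0) / steps
    return sum(c / (1.0 + (v / Xp) * (t0 + (i + 0.5) * dt)) * dt
               for i in range(steps))

# The closed form matches the integral...
assert abs(X_closed(50.0) - X_numeric(50.0)) < 1e-3
# ...and grows without bound, so the light eventually crosses Xp even
# though the object recedes at 3c.
assert X_closed(1e30) > Xp
```

The logarithmic growth is slow, which is why the wait can be enormous, but it never stops.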
Warp wrote:
p4wn3r wrote:
So, by looking at the graph, since z is approx 1 when v=c, you would observe light at half its original frequency. And the largest recession speed observable could be anything between 2c and 4c.
Thanks. But now I'm puzzled why the answer to the second question depends on things that intuitively seem irrelevant, and why it isn't an exact number. (After all, speed c isn't dependent on anything, and the metric expansion of the universe is just about changing geometry. Why would any of this depend eg. on the age of the universe?)
Didn't you read my explanation, Warp? The question you are asking is whether light emitted at some point will *ever* reach us in the future, so of course it depends on how the universe is going to expand in the future. If you imagine your snail on a rubber band, it has a huge effect on the progress the snail makes if you stop pulling on the band, or if you start letting it contract, for example.
I would TAS a tribe of chimpanzees, and see if I could use them to beat humanity to Mars. Goals: Avoid contact with humans - build everything (infrastructure, tools, etc.) from the ground up. Establish a self-sufficient base on Mars as quickly as possible. I think this would require careful planning and be quite difficult to pull off. And if developing a space-capable industrial society in the African jungle without being detected by humans is unfeasible, going for weapons first might be the best choice.
p4wn3r wrote:
amaurea wrote:
Right now, the universe seems to be transitioning from a phase where a = (t/t0)^(2/3) to one where a = exp(H(t-t0)). In the former, the integral evaluates to 3ct0([t/t0]^(1/3)-1), and in the latter, it becomes c/H*(1-exp(-H(t-t0))).
Davis-Lineweaver wrote:
The myth that superluminally receding galaxies are beyond our view, may have propagated through some historical preconceptions. Firstly, objects on our event horizon do have infinite redshift, tempting us to apply our SR knowledge that infinite redshift corresponds to a velocity of c. Secondly, the once popular steady state theory predicts exponential expansion, for which the Hubble sphere and event horizon are coincident.
That's a nice, pedagogical paper you found, p4wn3r. I haven't read all of it yet, but it appears to agree with what I said above: as long as H is decreasing we can see recessional velocities higher than c, but if H is constant, c is the limit. The steady state universe is one (obsolete) example of a case with constant H. Inflation is another. And a dark-energy-dominated universe is a third. The latter is the case we seem to be approaching right now, and if our current understanding of the universe (LCDM) is correct, we will be thoroughly dark-energy-dominated in a few tens of billions of years.
Warp wrote:
amaurea wrote:
If the two objects are receding from each other faster than the speed of light, they cannot send light to each other.
That's not what I have understood, and it's precisely a big source of confusion and misunderstanding.
Let's do this properly, then. The scale factor a(t) indicates the ratio of the size of the universe at time t to the size now, at the current time (t0). Using this, we can define a set of co-moving coordinates X = x/a, where x is the normal distance. The nice property of these coordinates is that they compensate for the expansion of the universe - that is, galaxies that are moving away from us only due to the expansion of the universe will have a time-varying x, but a constant X.

Let us calculate how light moves in terms of these co-moving coordinates, assuming light is emitted at the present time (t0). We then get X(t) = int(dX/dx * dx/dt * dt, t=t0..t) = int(1/a c dt, t=t0..t). So how far light has gotten by time t depends on exactly how the universe expands (a(t)). Right now, the universe seems to be transitioning from a phase where a = (t/t0)^(2/3) to one where a = exp(H(t-t0)). In the former, the integral evaluates to 3ct0([t/t0]^(1/3)-1), and in the latter, it becomes c/H*(1-exp(-H(t-t0))).

What we are interested in is how far light can get, since that determines whether light emitted from a far away galaxy can ever reach us (remember that we are calculating co-moving distances here, so expansion is already taken into account). We see that in the first case, we can get arbitrarily large numbers for X by inserting large numbers for t. This means that for a matter dominated universe (the first case), there is no limit to how far light can get if we wait long enough - a galaxy will never become invisible. This is not that surprising, since a universe in this phase is slowing down, and hence, if we wait long enough, expansion will no longer be a problem. However, in the second case, a vacuum energy dominated universe, there is a limit to how far light can get. If we let t -> infinity in that expression, we find X = c/H. This is the most realistic case, actually, since the universe is becoming more and more vacuum energy dominated.
To answer Warp's question, then, we need to relate this to the recession velocity. An object a (normal) distance x away is receding from us with velocity Hx = HaX. If we are considering light emitted right now, then a(t0) = 1, so we simply get v = HX. And inserting the expression for the distance for the furthest light that can ever reach us in a vacuum-energy dominated era, X = c/H, we find v = c. That is, galaxies that are receding from us faster than c cannot communicate with us. So the conclusion is still the same as previously. Physically, the mechanism is that in a given time period dt, light moves c dt towards us, but the space between us and it expands by Hx dt. If Hx dt > c dt, then the light will not have made any progress, and will get further and further away. If H is constant (the vacuum energy case), then it will never arrive at all. The only reason why light eventually arrives in the matter domination case is that H decreases with time, and eventually falls low enough that Hx < c.
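These two regimes are easy to verify numerically. The following plain-Python sketch (toy units with c = H = t0 = 1, chosen only for illustration) integrates X = int(c/a dt) for both scale factors and checks the closed forms above:

```python
import math

c, t0, H = 1.0, 1.0, 1.0  # toy units: c = H = t0 = 1

def X(a_of_t, t1, steps=200_000):
    """Comoving distance covered by light emitted at t0:
    X = int(c/a(t) dt, t=t0..t1), midpoint rule."""
    dt = (t1 - t0) / steps
    return sum(c / a_of_t(t0 + (i + 0.5) * dt) * dt for i in range(steps))

matter = lambda t: (t / t0) ** (2.0 / 3.0)  # matter domination
vacuum = lambda t: math.exp(H * (t - t0))   # vacuum energy domination

# Matter domination: X agrees with 3*c*t0*((t/t0)**(1/3) - 1), which is
# unbounded, so light eventually covers any comoving distance.
assert abs(X(matter, 1000.0) - 3 * c * t0 * (1000.0 ** (1 / 3) - 1)) < 1e-2
# Vacuum domination: X saturates at the horizon c/H and never exceeds it.
assert X(vacuum, 100.0) < c / H
assert abs(X(vacuum, 100.0) - c / H) < 1e-3
```

Pushing t1 higher in the vacuum case only brings X exponentially closer to c/H, which is the horizon behind the v = c limit above.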
Speaking of candidates, sometimes it's nice to see which ones you actually agree most with - the result might be surprising: http://www.isidewith.com/
Warp wrote:
1) If two objects are receding from each other slightly faster than c (due to the metric expansion of space) and the first object sends light to the other, what's the color of the light when it arrives? (I'm thinking about redshift here.)
If the two objects are receding from each other faster than the speed of light, they cannot send light to each other. However, they might still be able to see each other if they were closer together in the past. Imagine a pair of galaxies that are pulled further and further apart by the expansion of the universe. At time t1, they are finally separated far enough that they are receding from each other faster than the speed of light. Any photons sent out after this by galaxy A will never be received by galaxy B (unless the expansion speed of the universe changes). Light sent out before t1 will be received, but only after a large delay. This delay grows longer and longer as we approach t1 (because the distance the light has to travel grows while it is travelling), at which point it becomes infinite. This means that from one of the galaxies' point of view, the other one will always be visible, but it will be a frozen, fading, ever more red-shifted past image of the galaxy, which never updates to reflect what happened after t1.
Warp wrote:
2) I think there's a cutoff point for how fast the two objects can recede from each other before light can reach one from the other. In other words, if the two objects recede faster than a certain speed, light will never be able to travel between them. What is this speed?
The speed of light, c.
CoolKirby wrote:
andymac wrote:
What's really appalling is that we have a published run on this site that is being rivaled by real time runs.
Totally agree. The best thing to do is work on and finish this run so we can stay ahead of real-time speedruns. After the new record is established, there will be plenty of time to implement small improvements in a v2 run.
Let me also second this. Publish early and often: that is the best way to ensure that something gets published, and it may also inspire more people to TAS SM64. I also think this is the fastest way to reach a high-quality run in the end. And as CoolKirby says, not submitting V1 effectively means that you think the current run should represent TASvideos instead.
I agree with the advice to learn two languages at the same time, and I recommend C and python as the pair. C is a good low-level language, and is what is used to implement python, which is a good high-level language. Not only will learning both cover a wide range of abstraction levels; there are also high-performance extensions to python (like cython) that allow you to write tight, performance-critical sections of your python files in inline C. So this pair makes sense in my opinion.
If it were me, I would wait for v2 to incorporate that improvement. "Release early, release often" makes sense in open source software because having something out there increases exposure, makes it possible for others to submit their own improvements, and decreases the chance of the project disappearing before anything is produced. I think the same arguments work just as well for TASing. And there is no shame in saying "Known improvements: Towards the end of this TAS I discovered a possible improvement in the first chapter. This may save about 10 seconds, and I'm planning to include it in a future improvement of the TAS". Another reason for going through the whole TAS for each iteration is that you may hit upon other snags in the remaining parts of the game, and it is best to know about them now rather than hitting them after a restart, forcing you to restart yet again. But of course, it is your work, and the most important thing is that you have fun doing it. And perhaps redoing is less work than I imagine it to be.
jlun2 wrote:
alec kermit wrote:
People down-vote TAS's of (officially licensed) games just because they're bad games lol <_<? The more official games TASed the better IMO, as long as the TAS itself was done well. No need to limit the game library because you think the game is bad, somebody that grew up with it may think it's fun, it does have a unique theme.
So, you want runs of these boring, but official games published?
Yes, I do. Publishing these runs will not make it more difficult to find TASes that will appeal even to non-fans of the game, as we mark those with stars, and have a separate page listing all of those.