Posts for p4wn3r

Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I think you are now pulling things from your behind.
Guess what you are full of?
I was going to reply until I saw those. I don't think it's possible to have a constructive discussion with someone who says things like that to people over such a small disagreement about C++. If you think C++ is a masterpiece of design, fine, but at this point you do seem like the textbook example of an annoying language evangelist.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Really?
Warp wrote:
If we are talking about programmatic optimizations
p4wn3r wrote:
Using factorization wheels is a mathematical optimization.
You compared it to programming optimizations, like cache stuff, and I said it isn't, because it makes the algorithm faster while being totally agnostic to the computer implementation.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
OK, my humble 2 cents. For the tl;dr: my overall advice is, if you want to avoid the learning curve of C++, or you need to write code for an architecture where no optimizing C++ compiler exists, or you want to avoid long compile times and incomprehensible C++ error messages, code in plain C. In all other situations, use C++.

For those who are more technically inclined: the claim that C++'s generic containers impose a performance overhead compared to C is false for most of them. Streams are very slow and I never use them; the Boost reference-counting pointers that are now part of the standard are noticeably slower than C pointers and even slower than some garbage-collected languages; and if you do some ugly casts, your code might need RTTI, which reduces performance. Nevertheless, any experienced C++ programmer will know how to optimize that stuff when needed, and I think C++ code is easier to optimize than C code.

That said, C++ is so poorly designed that it's embarrassing. I love C++, but most of the time I find myself using only object-oriented C with a larger standard library, and I constantly have to remind other people not to do what seems natural. I honestly think it's impossible for a common programmer to really master C++; there must be at most 10 people who've mastered it, probably all on the standards committee. There are a lot of silly details: the difference between "typename" and "class" in template declarations, the difference between static functions and functions in an anonymous namespace, using ios::sync_with_stdio() to speed up streams, make_shared() being faster than the shared_ptr constructors, etc. These are the ones I can remember; people more experienced than me can probably list many more.
Besides, there are some things which make no sense at all and do a lot of harm:
* The cascade of constructor calls that may happen when you don't declare them explicit is so ridiculous that one wonders if there's a rational explanation for it (there is: they realized it was a mistake after the first versions, but had to stay backwards compatible).
* Exceptions can behave quite unexpectedly when you link libraries compiled with different compilers. Many C++ experts recommend a much stricter approach to exceptions at module boundaries because of this behavior. It can be such a nightmare that a large company like Google simply chooses to ban C++ exceptions altogether.
* The throw() signature for functions is so stupid that nobody uses it.
* The keyword "register" and, before C++11, "auto" have exactly the same meaning as whitespace. However, they are still there to confuse you.
* Names of functions in the STL like fill() and distance() clash with pretty much anything. That means you have to resort to C-style naming conventions or pepper the code with std:: or "using std::foo;" statements, which is extremely annoying.
* Ultimately, even the designers of the language found writing namespaces inside deeply nested template functions annoying, so they made ADL, which has an algorithm so complicated that I've met only one person who understands it (Bjarne Stroustrup, whom I met at one of his talks in Russia). Seriously, not even compiler writers understand it: some time ago I saw code that worked on some compilers and not on others, because one compiler got ADL wrong.
* While C, like most languages, has a context-free grammar, C++'s grammar is undecidable. That means that the gcc parser needs hundreds of thousands of lines to parse C++, that C++ code has excruciatingly long compile times, and that there'll never be neat IDE features like those of Eclipse for Java.
Anyway, this should not give the impression that I think C++ is a bad language.
It has its flaws, like any language; I just rant a little because the committee has not learned from its previous mistakes: in C++11 they continue to make the language needlessly more complicated, while people still need "idioms" to do basic stuff and fundamental things like concurrent programming are still not implemented.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Warp wrote:
p4wn3r wrote:
Further optimizations can be applied: you can initialize all even numbers except 2 as composite, start at 3 and use n+=2 instead of ++n, and that can be made even better using something called a factorization wheel, but that's more complicated.
If we are talking about programmatic optimizations, then one of the major problems with the basic algorithm is that it has poor cache locality if the bit array is much larger than even the outermost cache, and the algorithm can be made significantly faster by doing things in a slightly different order. However, that topic is beyond the scope of this thread, which is about math.
Using factorization wheels is a mathematical optimization. Notice that I didn't mention anything about the computer architecture. Using a step of 2 numbers removes 50% of the composite numbers. Using larger wheels generated by more prime numbers can shave off 95% of them. Although I don't think the algorithmic complexity changes, the improvement on the constants is huge.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
You do that for the same reason that you only go up to the square root of the maximum value. When you start processing a prime p, you have already processed all primes smaller than p. Any unmarked composite number n = r*s will have r and s greater than or equal to p, so n is at least p². Because of that, the sieve is already correct for all values less than p². Further optimizations can be applied: you can initialize all even numbers except 2 as composite, start at 3 and use n+=2 instead of ++n, and that can be made even better using something called a factorization wheel, but that's more complicated.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
On the featured movie page, "Read More..." can trim the movie description in the middle of a tag, leaving it malformed:
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
turska wrote:
In the specific case of this submission, where Vault vs Moons and entertainment value aren't relevant to the verdict, the poll indeed serves little use. However, for the vast majority of submissions that aren't controversial special cases, it is a very useful way for users to provide simple feedback on whether or not they liked the run. That's not to say you cannot voice your opinion on whether or not a submission should be published - you are welcome and encouraged to post your arguments on the matter.
The poll asks an irrelevant question precisely when complicated decisions that would benefit most from seeing votes need to be made. It may be sufficient if, for the vast majority of runs, you don't feel the need to ask people boring stuff like "is the run optimized well?", "do the goals make sense?", etc. However, I prefer to use it in the old and less dysfunctional way. Anyway, if it somehow becomes popular to publish TAS movies that are resyncs of previous movies, I have taken the liberty of making an algorithm showing how to net publications on tasvideos. It's largely based on MESHUGGAH's great four-step tutorial. Some may consider it a dark art, but well, so is TASing!
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Scepheo wrote:
If you want to voice your opinion on whether or not to publish at all, the poll is simply not for you.
Exactly. That's why the poll is a joke.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Publishing as an input resync looks like a good third alternative, but that would be something so simple that it's probably better to just update the publication text, saying "Author X redid this movie in emulator Y, getting a time of a:b:c; subsequent attempts to obsolete it could use this time as a reference". And to be clear: although I'm critical of the way Masterjun presents his Yellow submissions, I'm certain that he knows everything that's going on extremely well and that he optimized it as well as I would have. But, seriously, this run is one minute of menu clicking and has only one non-trivial part, which can be optimized using 40 lines of code. It's very simple to resync it. Further, imagine the publication text: "This run uses the same strategy as the previous run, but it uses a different emulator, so luck manipulation is a bit different. But at least with the new emulator, Pikachu cries louder!" All this when Nach told me on IRC that VBA runs are still accepted if the games emulated don't have severe emulation flaws.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
ALAKTORN wrote:
p4wn3r, are you a VBA coder? you sound slightly biased
I was, some time ago. In the rerecording branch, however, I just made some commits trying to port it to Linux, but eventually gave up because there was little interest. VBA has its flaws on some obscure games (every emulator does); I just think that it should keep being updated, not deprecated. And hegyak, the runs in question are not using SGB. VBA has some ugly hacks to emulate SGB, but they have no effect if you're playing GBC.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Good question.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Tangent wrote:
From comments though, and at least one direct claim, people seem to think this run is meaningfully different in some way besides a slight resyncing it to a different emulator, again, which I find irksome.
I feel there's some extreme urge to obsolete older emulators; people base this on accuracy when, at least for VBA, most of the arguments are simply FUD. Just look at how everyone bashed Mupen, and yet its SM64 movies could be verified. In the past, there was a clear policy that runs that differ only because of emulator differences aren't accepted, and that you should offer substantial improvements when using a different emulator. Nowadays it seems increasingly hard to get people on this site to agree that we shouldn't obsolete movies based on illusions of accuracy, and there's at least one admin post encouraging people to obsolete movies in that manner: http://tasvideos.org/forum/viewtopic.php?p=342110#342110
Masterjun wrote:
cool when did the vote question change to Do you want this movie accepted? ?
It was "Do you think this movie should be published?" before. The question as it currently stands is a joke. I'd find a video from a good stand-up comedian entertaining; I'm not sure how that would help a judge determine whether he should publish such a video if it were submitted.
also lol @ not using a more accurate emulator which I totally didn't
That's right, you didn't. If you think otherwise, please link us to something that proves lsnes/bizhawk emulates this game any better than VBA. Your emulator does a lot better on sound, but that's just output; you could fix that in VBA and the published run would sync with no problems. There's a timing test that VBA fails that makes some difference if you play awesome games like "Mary-Kate and Ashley", but makes no difference here. The timing in VBA was a lot worse in previous versions; those errors were fixed, and old RBY movies still sync if you make the reset input compatible with the new version.

The curious thing is that the only evidence I've come across for the claimed accuracy difference is these tests. Much of what someone would consider critical for accuracy, like LCD screen rendering, goes wholly untested in those test sets, but let's obsolete old runs anyway with emulators that have better sound.

As for the difference in lag frames, I want to call attention to the difference of almost 90 seconds between the time shown in the vbm and the end of the encode of this movie: http://tasvideos.org/2824S.html This happens because VBA doesn't count frames that have no output on the LCD, but it outputs those extra frames into the encode anyway, so the encode ends up longer. I don't know how lsnes/bizhawk handle the timing, but the order of magnitude of the alleged extra lag frames could be because VBA didn't count blank frames while still outputting the correct amount. And even then, no one has measured the timing of the real GBC with sufficient precision, so we don't know if lsnes/bizhawk timing is correct either.

It's definitely OK to criticize VBA's accuracy, but do it based on something solid, so that the people who spent time improving it don't get upset. Just because a piece of software is a tasvideos pet project that hacks a lot of independent cores together doesn't mean it's better in any objective sense.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Don't the rules say to play the game in GBC if possible? http://tasvideos.org/MovieRules.html#ConsoleSpecificRules So, I think encodes such as these should be preferred, since they show the game played on GBC.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
goldfish wrote:
What I don't get is whether or not this run would be faster IRL, i.e. console-verified. If I have two GBCs, plug the previous Yellow input file into one and this into the other, and start them simultaneously, does this one get to the credits first?
Well, what would happen is that only one run would sync until the end, or neither would (probably neither). The great villain is timing differences between the Game Boy LCD screen and the CPU clock. The RNG routine updates a 16-bit value 4 or 5 times a frame, so it probably uses timer interrupts to work: it gets the value of a hardware timer and updates the RNG. If, because of some timing inaccuracy, the RNG routine runs before an LCD interrupt when it should run after, the value of the timer will be different, and it will give a different RNG. Things like that occur rarely, but in a short movie like this there will be around 16000 RNG updates, and it's very unlikely that all of them will sync with the LCD's vblank. Also, iirc, the game uses the timer uninitialized at startup and there are 65536 possible values for it. It's hard enough to get the timer right at the beginning even without the LCD desynchronization.

What we know is: brute-forcing the RNG manipulation in VBA gives 11 frames, in lsnes it gives 9. Since Masterjun shared his code, I dug out my old script: http://pastebin.com/kidvX2mw (I coded it when I was learning to program, and it's a bit hacky, partly because I also used it to manipulate battles, but mostly because there were lots of bugs with VBA's Lua engine that caused desyncs, so I had to register a function, use global variables to hold state, and run an AutoHotkey script to press F1 periodically to load savestates.)

That's why I think it's not accurate to say there were improvements; it's just statistical noise from switching to another core, in a situation that's extremely unlikely to be verified. The optimization strategy for both runs is exactly the same. The only motivation for publishing it is to change the emulator used, that's all.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
It looks like you want to obsolete the run on a newer emulator. Fine, you have the right to do it, but allow me to clarify things a little. Please don't say that you improved luck manipulation by 2 frames. I brute forced all methods and know that reset + 10 frames is optimal on VBA, and I reverse engineered the game to find out that other input patterns don't affect timing and have no influence. By definition, a comparison requires running on the same core. On this core you get delay because of extra lag; you can get away with it because you want to use a supposedly more accurate core, but I find it extremely biased that you don't excuse the previous run for taking 11 frames when you can do 9 simply because you're on a different core. You didn't improve the luck manipulation of the previous run, because your comparison is null. I'm certain you know this, because you explicitly wrote in the submission that the difference is because of the cores. I'm asking you not to lead people who don't know about this fact to think this run features improved luck, which is what seems to be happening.

I'll totally vote Yes if you can convince me that lsnes has a better chance of syncing an RBY movie on console. A small difference in capacitance could introduce timer inaccuracies and make the movie desync. Also, the current accuracy tests used to dismiss VBA are moot for RBY, since they test hardware corner cases that don't happen in RBY and sound registers that don't affect the internal game state in RBY. And which accurate emulator is better, Bizhawk or lsnes? I suppose lsnes would make some difference if you emulated SGB, which you don't. I'll be sincere: it totally looks like you're picking cores at random to claim non-existent improvements on the published run, and I don't like the smell of that.

I had a chat with turska, here's how it turned out:
<turska> p4wn3r: submissions with no improvements would generally be rejected, yes
<p4wn3r> independent of the emulator?
<turska> p4wn3r: I'd say yes
<p4wn3r> thank you
So, it's at least consistent with site policy to accept runs based on their improvements, not on the emulator they're running. I vote No.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Derakon wrote:
Enumerating the allowed serializable function pointers seems kind of brittle; every time you introduce a new function that you want to be able to serialize, you have to add it to the enum. I expect in practice that means you'd have a file in the project that knows about a huge proportion of the functions in the codebase. Making serializable function pointers into their own separate objects is a bit better (since you don't have that centralized list of things-that-can-be-serialized), but then you have to deal with giving the function access to member variables in its parent. Doable, but probably pretty fiddly. Maybe a decorator function could do the job, though?
Well, the enum solution has the advantage of being simpler, and, as you said, it can put everything that's serializable in a single file. I think that's convenient to have, but I love coding in plain C and I'm not your average OO proponent, so take what I say with a huge grain of salt :) Anyway, I agree that it doesn't scale well. If you want persistence for a large number of things, you'll likely have to drop function pointers and resort to classes at some point. I'm not into design patterns, so I don't know what a decorator is, but I'd solve these access problems by passing references/pointers to the variables the function needs to modify, or by declaring them protected and making these functions methods of a subclass.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
You could use enums for each function pointer that can be called, and serialize the enums instead. I think this is less hacky than the solution you propose, but it suffers from similar problems, since you have to maintain the enums together with the functions. I understand you want to use Python's serialization API for all this, so it seems to me that the best solution is to forgo function pointers and use Java-style dependency injection. Basically, you replace every function pointer that can be called in the game object with a class deriving from an abstract base class, and inject that abstract class into the game object class. In this case you'd be serializing a class, with the same effect as a function pointer. I also agree that you should avoid lambdas; using them is not worth the security risk. But you seem to have some serious coupling if there are lambdas that capture the whole application context.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
This thread is already long enough, so not many people will read this. I didn't think the post would be that long, but I still took some time to write it, so I'll post anyway.
[Numbers mine]
1- Pressing Up+Down or Left+Right on controllers that don't normally allow it
2- Pressing buttons that don't exist on normal controllers but that are still read through the controller port
3- Streaming arbitrary data through an external port
4- Partially disconnecting a cartridge
5- Swapping discs when not prompted
6- Swapping discs with a completely different game
7- Witnessing the ending without completing the normal game's objective
8- Getting a game beaten state in memory but without witnessing the normal ending
9- Resetting during a save operation
10- Starting a run with dirty memory (excluding save data)
1) Valid. Just because normal controllers don't allow it doesn't mean it's impossible; it's all possible user input anyway, and some other controllers provide that functionality.
2) Valid, it's possible user input. If the programmers failed to catch all the cases where it could interfere with the game, it's a programming error, like any other.
3) Valid, for the same reason as 2). External ports expect input, and the game should take care of it; if it doesn't, it's a programming error.
4) Valid. The game should be prepared for the absence of external data and should handle that error appropriately. It's a horrible programming practice to assume that external resources will always be there. If the game doesn't abort or handle the error when it's without such resources, it's (again) a programming error.
5) Valid, see (4).
6) Has this ever been useful? I'm not familiar with multi-disc runs. Anyway, it's valid, see (4).
7) and 8) These points are different from the others, because they deal with what the run does, not how it does it. We have to consider them case by case; no magical hardware criterion exists to convince people that a game ended. Obviously, for different games, different criteria will be considered more important. We have an excellent way of deciding if what a run has done merits publication: it's called judging. No clear line needs to be drawn here.
9) Valid. It's the game's responsibility to make sure that save operations are atomic, and to signal that the save has been corrupted if they fail to complete. If for some reason that doesn't happen, it's an oversight in the game's code.
10) There's ordinary dirty memory, which is just data generated by random noise, and dirty memory with a reasonable amount of information; the second is not totally unrealistic, because there could be other things executing in RAM. I'm inclined to accept both. If the game behaves unexpectedly because of uninitialized data, it's the code's problem, not the user's.
Frankly, arguments like "the game should be played like X" are useless for TASing. It's just not how things work here; we already play games under extremely unrealistic conditions. We're definitely not interested in how things were intended to be, we're interested in how they are.

There's a saying I like when I teach people the C programming language: when the specification says that accessing arrays out of bounds is "undefined behavior", it means that doing so might erase your HDD. That's not merely a theoretical example. If someone exploited a buffer overflow, he could seize control of your program and, with some privilege escalation, run "rm -rf /", and your files would be deleted. Nearly all security vulnerabilities exist because the computer has the nasty habit of behaving how you told it to, not how you meant to tell it.

This is completely different from interfering with hardware. If a programmer puts a JMP instruction and it doesn't jump because you dropped your console on the floor, or the CPU got different data because of a cheat engine, then hardware behaved incorrectly in an artificial way that just can't be anticipated. No programmer can write code that's safe against transistors burning or devices that make the code itself behave differently.

There are some arguments against my position, but I find them unconvincing because they make at least one of the following mistakes:
a) Ignore the fact that we treat runs with light or no glitch abuse differently, almost always in separate categories. The obvious consequence is that people who have a preference for such movies won't see them obsoleted by heavy glitches they don't like.
b) Imply that low-level exploits are against some "spirit" of TASing, which only makes sense in very competitive environments that emphasize skill. Most people would agree that making TASes is much more about insights and thinking about corner cases in the gameplay than about skill and competition. Rules intended to make runs challenging and competitive don't have much place here.
c) Assert, with little evidence, that such movies might give a bad impression to viewers. At first glance, this point is laughably wrong, given the surge of interest when a game is broken for the first time.
d) Shoehorn things to make glitch abuse equivalent to cheating. Usually it goes like: glitching => heavy low-level glitches => punching hardware so that it works differently => GameShark => ROM hacks => making your own architecture and running game data on it. Of course, I can start with white and iteratively shoehorn it into darker tones of grey until I reach black, and claim that white is black. That's an obvious mistake, and not even a very creative one.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Warp wrote:
So my question is: How fast would a wheel have to spin in order for its outer edge to age, let's say, 10% slower than its surroundings? (In other words, for every 10 seconds that pass, the edge of the wheel only ages 9 seconds.)
In SR, dT² = dt² - dx² - dy² - dz². Changing to the coordinates of a frame rotating with angular velocity w, we have x = r cos(theta - wt), y = r sin(theta - wt). After evaluating the differentials with some algebra, we get dT² = (1 - w²r²)dt² - dr² - r²d(theta)² - dz² - 2wr² dt d(theta). The metric becomes singular for wr larger than the speed of light, but since spacetime is still flat, it's just a coordinate singularity, not a physical one. Bodies at rest in this frame are rotating at angular velocity w in the inertial one. So, setting dr = dz = d(theta) = 0, we have dt/dT = 1/sqrt(1 - w²r²). tl;dr: It's (maybe surprisingly) the old time dilation equation dt = gamma*dT. For the situation in the statement, the inertial frame must measure a clock cycle at the edge a factor 10/9 longer. Thus dt/dT = 10/9, which implies v ≈ 43.6% of the speed of light.
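Written out explicitly (in units where c = 1, as in the derivation above), the final step is just:

```latex
\frac{dt}{dT} = \gamma = \frac{1}{\sqrt{1 - v^2}} = \frac{10}{9}
\quad\Longrightarrow\quad
v = \sqrt{1 - \left(\tfrac{9}{10}\right)^2} = \frac{\sqrt{19}}{10} \approx 0.436
```

so the edge of the wheel must move at about 43.6% of the speed of light.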
Mitjitsu wrote:
I know when you're working out projectiles at school you're always doing it without factoring air resistance. Does anyone know how to factor it into calculations?
For a more complete answer: there's a parameter in fluid mechanics called the Reynolds number. It measures how strong the resistance forces are compared to the body's inertia. This number is important because its value indicates which approximation to use. For low Reynolds numbers, drag can be considered proportional to the body's velocity. For higher ones, it is proportional to the square of the velocity. For very high Reynolds numbers, turbulence has a considerable effect; at this point, chaos happens. You could try to derive some solutions, but you wouldn't be able to use them for anything useful. According to my aeronautical engineer friends, what they do is impose heavy restrictions on the pressure, temperature, etc. of the fluid, run experiments, run simulations on supercomputers, and try to derive formulas from the numerical data. The problem is that those formulas are very complicated and only work for a very specific subset of the problem. With current knowledge, no unified way of dealing with turbulence exists.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Warp wrote:
This idea can be reversed: If we don't allow cheating devices, one of the reasons being that it makes the whole concept of "speedrunning" completely moot, then why should we allow glitch abuse that effectively amounts to the same thing? Just because it's doing it by abusing bugs in the game doesn't render the end result much different. There's still no "speedrun" to speak of, just a jump to the very end. What's there to watch?
You raise an interesting point, but you put it too simply. People in the past didn't suggest using cheat codes to complete the game at the title screen; they proposed them as a tool to complete otherwise impossible goals, but they still wanted to maintain the gameplay. Not allowing them is a question of arbitrariness. If you're going to use a code, which one? How do you decide which one is best to use? If 50 people submitted 50 movies completing a game in the fastest way possible using 50 different codes, how do you decide which to accept and which to reject? If you're missing too much entertainment by disallowing cheats, then make a hack, change some stuff so that it doesn't look like the original, and submit a movie; the run will be judged on its merits as a run of a hack. There's still no reason to use cheat codes.

Game-breaking glitches in official games are different: by playing a good dump on an accurate emulator, we can be reasonably certain that everyone who has a copy of the game could theoretically perform them without looking for obscure hacks or using third-party devices. If the glitch is dumb to the point of destroying all value in the movie, some people might still be entertained just by knowing that it's possible to skip the whole game; in this situation we can always publish it separately. If it doesn't entertain, reject it and tell the author to ignore the glitch in the future. Anyway, simply let it come forward and be judged, like all the others.

There are really two messages here: one is that the specific Pokemon Yellow run that uses save corruption is uninteresting and doesn't deserve publication, which is sound criticism; the other is that some speedrunning "spirit" makes this run, and any other that abuses game-skipping glitches, equivalent to cheat codes, which is problematic.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I feel a little sad that my post was used to reject the submission, but well, it happens. A judge has to give a clear verdict: accept or reject. Although I abstained from voting, I think I would side with feos if pressed against the wall.

I don't think the SMW run is a good comparison, because there the input ends with the game in its final state, and it shows it is there with the end screen. Here this doesn't happen: the game is not in its final state, as can be seen clearly from the infinite loop. I remember from previous tests that, depending on where I called the credits routine, the game would even let me examine the TV if I pressed A. So the SMW run is not a counter-example to feos' criterion. People could say that while it may not be a well-defined ending, it looks like one. My problem with this is that deciding whether it looks like an ending is actually more difficult and more subjective than deciding whether it is an ending. IIRC this is the first Yellow submission where people have doubted the game's completion. And by this line of reasoning, SMW is actually a precedent against this run, since it was accepted because the ending routine was accurate, without concern for those who thought it didn't look like an ending.

I disagree that this decision leaves a VBA run published forever. You could say that the category is useless and have a code-injection playaround obsolete it. Surely, since console verification became possible, the site has been turning towards more accurate emulators (even if such emulators might have performance-reducing architectural choices that make SNES games run at 50% speed on my 2009 laptop). However, we all know that even crappy emulators are very accurate on well-known games, so the difference for RBY, if it exists, is minimal. Don't get me wrong, I'm 99% sure that the published run has no chance of syncing on GB hardware, but I'm also not sure any current emulator can do that, because this game's entropy depends heavily on timing.
I have no issue with obsoleting runs on inaccurate emulators, if you can justify it well. If it were proven that a Bizhawk run would sync on hardware, that would be perfectly fine and there would be few complaints. It would still be a demand for accuracy that is, AFAIK, unprecedented, but I'd be totally fine with it; I would just ask why this requirement wasn't made before for SMW, where an accurate emulator existed at the time of submission. If you can't prove the verification works, then we should obsolete runs every time inaccuracies are fixed, which has very obvious drawbacks that I won't bother discussing here.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Here I am, risen from the darkness! \o/ Nice run, Masterjun and Fractal, you've done a very nice job reverse engineering the game to find the routine that prints the credits. I didn't try this approach some months before because it gives a strange ending: if you override the map script pointer with that address, the game just prints the credits while technically you're still in your room, and since the map script function executes forever, the credits end up in an infinite loop. And it looks like you called into the middle of the routine, resulting in a garbled mess. I really don't know what to do with this run, this game's become a complete mess already.

EDIT: To clarify, ending through normal gameplay goes like this:
1 - Ash enters the Hall of Fame.
2 - The Hall of Fame script is called, makes Ash move, talk to Oak, and calls the ending sequence.
3 - The ending sequence prints the credits.
4 - The ending sequence returns and the game waits for a reset.

The previous run starts the ending from step 2: it calls the HoF routine and from there proceeds normally. This run instead starts from step 3: it (clumsily) calls the ending routine, and the game never waits for a reset in step 4; the credits are called again and again, because that's how map script routines work. Do I think the ending is invalid because it doesn't follow that sequence? Absolutely not; if that were the case, a run that simply jumps to the "wait for reset" phase would be a valid ending. I think it's subjective whether the credits, the data saving, or the normal gameplay sequence is the most important part.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Did you consider using the built-in ROM input function? I've read the GBC documentation, and it seems that to call it you'd have to write the value 3 to any address in the range 0x2000-0x3fff (this switches the 0x4000-0x7fff area to bank 3), and then call 0x4004 or 0x4000 (there's logic right after 0x4000 that may cause the routine not to read the input; calling 0x4004 will make it read for sure). I still have to read more to see whether there's any chance of an NMI coming and switching the bank away, though this likely doesn't happen (unless the guys who designed the architecture wanted to give developers headaches). I'm not very sure this saves time, though, since you're already using very few bytes. The biggest advantage would be getting 8 bits a frame instead of 4, but the bottleneck is clearly scrolling time.

EDIT: It seems things get much simpler if you can just use the HALT instruction: after coming back from the HALT, the first thing the game does is read input to FFF5, and it comes back from the halt once per frame, so the logic to check whether input changed goes away. What happens when you use a HALT? You're running the code in that region through a function pointer, so being able to HALT correctly in that part might be doable; maybe we're just one memory write away from it.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
In my notation, á = /a/, é = /ε/, ê = /e/; for the others I could find suitable words.

AKheon = Á Keon
Acmlm = Á Sê Emmy Élli Emmy
Sonikkustar = Soni Kuhstar
foda = Fó (o like in nod) Dah
adelikat = á dêli cat
Nach = Nátch
Dada = Dá Dá
jlun2 = Jay lun two
feos = Fé os
Dacicus = Dá see cus
Bisqwit = Bisk wit
klmz = Kah Élli Emmy Zê
Scepheo = Sé Phêo
antd = ant dee
aqfaq = ák fák

Basically, when I see an A that I don't know how to pronounce, I assume it's said openly, the way it's most common in my language (/a/). For the E's I usually assume they're open too (/ε/). And in the names Acmlm and klmz, I just pronounce the individual letters in Portuguese.
Experienced Forum User, Published Author, Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
scrimpeh wrote:
That's a better approach, but the best jokes would work only with the subject matter. Cut and paste jokes will only get us so far as well.
Or we could just make fan TAStic puns.