Posts for Tub


Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Derakon wrote:
Interesting that it's faster to kill Kraid than to use the Norfair back door. I guess the lava swim requires too many energy tanks for that route to be worthwhile?
There are plenty of e-tanks on your way to Norfair and another in the lava, that's not a problem at all. But to use the back door, you must:
- make a larger detour for charge beam
- get grapple
- sequence break a super missile not from Spore Spawn (which is slow)
- get slowly through the lava and bomb your way through the speed booster room
If we time both routes from Varia to Speed Booster and include the time for grabbing charge beam, the Kraid route is ~103 seconds faster (406 vs 509 seconds). Now you might think that exiting through the back door gets you closer to the Norfair power bombs, but bombing through the SB room and traversing the lava again is actually slower; the Kraid route gains another ~12s. The Kraid route will end up with fewer e-tanks and without grapple, but instead it'll get more SM packs and Spazer.
I got the times by comparing with the any% run I linked here. That run appears to be less optimized than hoandjzj's, but it's very unlikely to improve the Norfair route by ~115 seconds.
What bothers me is that the other run managed to skip both the L3 and the L4 lock. Hoandjzj got both, each being a large detour. Skipping them would require the X-Ray Scope (almost on your route) and some detours inside the space pirate ship, but it still appears to be faster to skip them.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
From what I understand it's a specific glitch in the shop that allows you to open the inventory mid-sale. It's only possible during a 1-frame window. When dumping zora eggs, you can't access your inventory. I'm sure they've tested every single frame, twice.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Not sure why we're even voting. Publish & star please.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Desyncs for me in the room before Varia. Which version of the hack did you use? Watching the YouTube encode, this is a clear yes vote. The run looks very optimized, the route appears to be good, and I think this hack can finally obsolete Mockingbird. (Although given the amount of high quality SM TASes, I still think it's a pity that many of these will never find a place on this site. Can we have the Phazon run obsolete Mockingbird, then MSZM obsolete Phazon or something?)
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
The only thing worthy of publication is the submission text. The video is just boring, even when you know what's going on.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Neither of the two TASes looks fully optimized. I don't have an .smv to prove it, but there are situations where it just looks like a corner could have been cut better, or a short charge could have been done differently, etc. 4N6: I encourage you to actually play the hack. It's a good one, though your route and playing experience will differ a lot from the TASes you see.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
There's an any% run on YouTube, I found it embedded here: http://www.metroid-database.com/forum/viewtopic.php?p=149620#p149620 /edit: I've downloaded the 100% run off nicovideo, but it seems to end after snatching the SM behind Spore Spawn. Is that a problem with my download, or does the video actually end there?
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Warp wrote:
A more or less average but good FPS, but definitely a disappointment compared to all the hype?
Maybe I missed the hype. Considering it's been in development hell for a decade and was almost canceled and pronounced dead several times, I think it turned out quite well. Maybe that's the reason I liked it: low expectations. Seeing that you played (well, bought) Quake4 and Crysis, I consider DNF to be more fun than either. Crysis has these beautiful gfx, but I don't think the gameplay is any fun. DNF is different enough from HL2 and Modern Warfare that you can play (buy) all of them without one diminishing the other. HL2 manages to tell a good story. MW has these beautiful interactive cutscenes called "levels". And DNF just features braindead, over-the-top action with the Duke. I guess it's a matter of taste.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Hoandjzj wrote:
The Missile pack "above the beginning of the long shinespark to the red tower" requires shinerspark (and one shot to reveal it, of course :) if I collect it, then I have to shinerspark again, in total it'll take more than 4s :D
You're visiting the room twice. At 18 minutes, you would need to charge again, that's true. But when visiting again at 24 minutes, you don't need a shinespark afterwards, so you could charge one, spark up and grab the missile without having to retreat. Would still take too long, though. If the missile pack above the Dark Beam was too slow to get, then this one is even slower.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
I actually liked the game. It's not the revolutionary new game that Duke3D was back in its day, but it's a solid shooter nonetheless. I like the amount of variety in the game: each level looks different, the small sections are a welcome change, the minigames are fun enough, and the weaponry - while hardly new - is still fun. Add the right amount of boss fights and you've got a good game. I also liked the Duke. I can't understand why someone would play the Duke, then complain about the immature macho one-liners. It's the Duke as we know him, as he should be. The gfx were ok. Some may call them outdated, but then again my three favourite games are from 1994, 2000 and 2005, so screw that. On a less positive note, the loading times were awful, the difficulty curve was quite bumpy, and the console-induced restriction to two weapons sucks. But meh, it didn't stop me from having a good time.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
I've finally finished the hack, so I could watch this TAS without spoiling anything. It was a pleasure to watch, thank you Hoandjzj!
The slow Ridley fight looks bad :/
- any additional missile pack should save 3-4 seconds. Have you considered the missile pack above the Golden Torizo? Maybe the one above the beginning of the long shinespark to the red tower?
- any additional phazon missile pack should save ~10 seconds, though I don't remember any near your path. :/
Also, why didn't you group the Metroid/Core X? If you freeze several at the same position you can hit all of them with one missile.
Compared to Mockingbird Station, both hacks have some strong points and some weak points. Mockingbird had a few very interesting rooms (e.g. the spike room to grapple, the Phantoon fight), but also a lot of dull rooms you had to endure several times due to all the backtracking. Phazon doesn't have as much backtracking (just a few quick visits to the central hub), but it has pointless long walks through empty rooms to get anywhere, and for the first half of the run Samus doesn't appear to be in any danger. Not sure which one I prefer.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
IIRC (so don't quote me on that) that's a limitation of the hit detection. Your sword will usually stay inside the enemy for several frames, but it's only supposed to damage each enemy once. Thus, the game keeps a list of "<enemy> cannot be hit again by <damage source> for <x> frames". The trick is to apply enough damage sources at the same time to make the list overflow. The game will evict the oldest entry, and its damage source can hit again, creating a new entry, evicting the next oldest, which will hit again, etc. Since gocha didn't have any objections to the route or implementation, I guess it's safe to say yes. :)
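To illustrate the idea, here's a rough sketch of such a fixed-size cooldown list in Python. This is only my guess at the mechanism as described above; the capacity, cooldown length and names are all made up, not the game's actual code.
Language: python

# Hypothetical sketch of a fixed-capacity "recently hit" list.
# Capacity and cooldown length are invented; the real game's values differ.
HIT_LIST_CAPACITY = 8
COOLDOWN_FRAMES = 10

hit_list = []  # oldest entry first; entries are (enemy_id, source_id, expires_at_frame)

def try_hit(enemy_id, source_id, frame):
    """Apply damage unless this (enemy, damage source) pair is still on cooldown."""
    # forget entries whose cooldown has run out
    hit_list[:] = [e for e in hit_list if e[2] > frame]
    if any(e[0] == enemy_id and e[1] == source_id for e in hit_list):
        return False  # still blocked, no damage this frame
    if len(hit_list) >= HIT_LIST_CAPACITY:
        hit_list.pop(0)  # list is full: evict the oldest entry, so that pair can hit again early
    hit_list.append((enemy_id, source_id, frame + COOLDOWN_FRAMES))
    return True  # damage applied

With enough simultaneous damage sources, entries keep getting evicted long before their cooldown expires, which is exactly the repeated-hit effect described above.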
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
It's interesting to see the theories people come up with to dismiss the blatant plot holes. Though no matter which theory you subscribe to, there are still plenty of blatant plot holes left, so the likely case is that there is no deep, hidden story behind it all; the writers were just really good at writing blatant plot holes. On topic: in the meantime, I finished watching the run, and I've already voted yes.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
DarkKobold wrote:
6. That was a speed/entertainment trade-off. Originally, I had plan to do a verbal commentary on the run, and was going to point out the 5 or so different "eyes on me" variations that occur in the game. (It only costs about 10 frames, since you have to select different instruments anyway). However, I just wanted to submit, and haven't gotten to do a verbal commentary. Next run I may do one.
Didn't the different scores lead to different dialogues between Z and Rinoa? I remember playing that scene three times to watch them all, and IIRC they were of different lengths.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
I figured it out. Although I doubt anyone else needs it, here's the OpenOffice macro code. The JavaScript macros kept crashing, so I had to use Basic. Ugh.
Usage: DROPRATE(kills, drops, confidence), e.g. =DROPRATE(100, 48, 95)
Syntax highlighting is slightly off because the forum doesn't seem to know VBasic syntax.
Language: basic

Dim fun as Object

Function DropRate_PValue(p, n, m)
    Dim expected, offset as double
    Dim high, low as double
    Dim pval as double
    Dim args(1 to 4) as Variant
    Dim rargs(1 to 2) as Variant

    expected = p*n
    offset = abs(m - expected)
    pval = 0

    high = expected + offset
    if (high <= n) then
        rargs(1) = high
        rargs(2) = 1
        high = fun.CallFunction("ceiling", rargs() )
        args(1) = n
        args(2) = p
        args(3) = high
        args(4) = n
        pval = pval + fun.CallFunction( "B", args() )
    end if

    low = expected - offset
    if (low >= 0) then
        rargs(1) = low
        rargs(2) = 1
        low = fun.CallFunction("floor", rargs() )
        args(1) = n
        args(2) = p
        args(3) = 0
        args(4) = low
        pval = pval + fun.CallFunction( "B", args() )
    end if

    DropRate_PValue = pval
End Function

Function DropRate_BinarySearch(dir, low, high, n, m, target, iter)
    Dim test, pval as double

    test = (high+low)/2.0
    if (high - low < 0.00001) then
        DropRate_BinarySearch = test
        Exit Function
    end if
    if (iter > 20) then
        DropRate_BinarySearch = -1
        Exit Function
    end if

    pval = DropRate_PValue(test, n, m)
    if (dir * pval > dir * target) then
        DropRate_BinarySearch = DropRate_BinarySearch(dir, test, high, n, m, target, iter+1)
    else
        DropRate_BinarySearch = DropRate_BinarySearch(dir, low, test, n, m, target, iter+1)
    end if
End Function

Function DropRate(n, m, confidence)
    Dim expected as double
    Dim a, b as double

    fun = createUnoService( "com.sun.star.sheet.FunctionAccess" )

    if (m > n) then
        DropRate = "#ERR n > m"
        Exit Function
    end if

    expected = m / n
    confidence = (100-confidence) / 100

    a = DropRate_BinarySearch(-1, 0.0, expected, n, m, confidence, 0)
    b = DropRate_BinarySearch( 1, expected, 1.0, n, m, confidence, 0)

    DropRate = Format(a*100, "0.00") & "% - " & Format(b*100, "0.00") & "%"
End Function
Thanks for your help, everyone!
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Alright, I don't have friends, and most gamers would rather just play than maintain spreadsheets. Counting drops is trivial (look at your inventory); counting how many foes you killed on the way there is not; it requires some list-keeping, which will slow down your farming. I've tried asking; there aren't many who contribute numbers.
I've tried the H0 approach. Your 48 drops out of 100 kills yield:
95% confidence: 38.13% - 58.00%
99% confidence: 35.50% - 61.00%
Tub wrote:
I throw the coin 10 times and get 9 tails (0) and 1 heads (1)
x_m = 0.1
sigma_m = sqrt( (9*0.1^2 + 1*0.9^2) / (10*11) ) = 0.09045
old method (95%): 0% - 27.73%
H0-method (95%): 0.51% - 45%
H0-method (99%): 0.10% - 51.23%
and to have a comparison with my prior-probability attempt:
bayes (64.41%): 1% - 19%
H0-method (64.41%): 4.30% - 25%
Tub wrote:
Now I do this again, 100 times, getting 10 heads.
x_m still 0.1
sigma_m = sqrt( (90*0.1^2 + 10*0.9^2) / (100*101) ) = 0.02985
old method (95%): 4.15% - 15.85%
H0-method (95%): 5.45% - 17.50%
H0-method (99%): 4.24% - 20.50%
These numbers look a lot better. The first approach doesn't always exclude 0%, and I'm 100% confident it's not 0%. So... should I just pick the numbers that look best? o_O
Unfortunately, I have no formula to get the interval; I'm using a binary search until it's narrowed down sufficiently. Which sucks, because I see no way to implement it in oocalc. Simple formulas don't support loops, and touching the macro functions has led to a surprising amount of oocalc crashes. :/
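For reference, this is roughly what that binary search looks like outside of oocalc. A sketch in Python, assuming scipy; the function names are mine, and the two-sided p-value is the "equal distance from the expectation" test described above:
Language: python

# Sketch of the H0-interval idea: keep every drop rate p whose two-sided
# p-value for the observed data is >= alpha, and find the interval's
# boundaries by bisection. Assumes scipy; not the exact spreadsheet code.
import math
from scipy.stats import binom

def pvalue(p, n, m):
    expected = p * n
    offset = abs(m - expected)
    pval = 0.0
    high = expected + offset
    if high <= n:
        pval += 1.0 - binom.cdf(math.ceil(high) - 1, n, p)  # P(X >= high)
    low = expected - offset
    if low >= 0:
        pval += binom.cdf(math.floor(low), n, p)             # P(X <= low)
    return pval

def interval(n, m, confidence=0.95):
    alpha = 1.0 - confidence
    def boundary(lo, hi, inside_is_high):
        # bisect towards the point where the p-value crosses alpha
        for _ in range(60):
            mid = (lo + hi) / 2.0
            if (pvalue(mid, n, m) >= alpha) == inside_is_high:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0
    phat = m / n
    return boundary(0.0, phat, True), boundary(phat, 1.0, False)

# interval(100, 48, 0.95) should land near the 38%-58% range quoted above.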
Nitrodon wrote:
Tub wrote:
Though the result wouldn't be a confidence interval any more, so it's a completely different statement I'm going to make.
This would indeed be a confidence interval.
According to everything said in this thread, there are two wildly different methods to generate those intervals, and they yield vastly different numbers - yet both are considered perfectly fine confidence intervals? How is that possible? Isn't there a strict formal definition for that term?
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Just a small aside, because I need to get to work: the game in question is an online game, so I can neither add Lua scripts, nor disassemble it, nor observe the RNG. Also, I'm not interested in manipulating these drops (I wish I could!), but simply in having knowledge of the game; not for a guide but for a wiki. Tested drop rates are all I've got.
(Well, not just for knowledge per se. You could base item or gold gathering decisions upon the drop rates. Monster A takes ta seconds to kill and has a pa drop rate, Monster B takes tb seconds and has pb. Which one do I hunt? That's why said wiki often lists drop rates as #drops/#kills, which I feel is inadequate.)
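(Just to spell out the kind of decision I mean, with invented numbers:)
Language: python

# Toy version of the hunting decision; all numbers are made up.
# Expected drops per second is simply drop_rate / seconds_per_kill.
t_a, p_a = 30.0, 0.05    # monster A: 30s per kill, 5% drop rate
t_b, p_b = 12.0, 0.015   # monster B: 12s per kill, 1.5% drop rate
print(p_a / t_a, p_b / t_b)  # ~0.00167 vs ~0.00125 drops per second -> hunt A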
[..]central limit theorem[/..]
Wouldn't that again require large numbers of samples?
Reading your post, it occurred to me that I could just formulate an infinite number of H0's and define my confidence range as "all possible drop rates I cannot exclude with p < 5%". That does indeed get rid of the prior probabilities; I'll have to see how the math turns out and which values I get. Though the result wouldn't be a confidence interval any more, so it's a completely different statement I'm going to make.
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Wow. You're TASing two discs faster than I'm watching one! I'd vote yes on the funny submission text alone. Expect the actual yes vote in a few days/weeks, after I've watched it all, i.e. after this has been published :)
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Thanks for your insights, bobo, that was an interesting read. Though I'm not interested in determining whether the coin is fair; I know it's unfair and I wish to determine its actual chances. This was just an example for another problem I listed: monster drop rates. There can be no assumption that "the drops are fair" (p=0.5), so it's difficult to formulate a null hypothesis. So - if working with a prior probability - I don't see any better approach than equal likelihood for 0 < p < 1. I could say "it's surely below 50%" and model equal probabilities for 0 < p < 0.5. But how would I justify that constraint? In other words, I don't pick indifference out of principle, but because it's the closest thing I have given my prior knowledge. Of course any prior probability is an ugliness, but the other approach isn't free of ugly assumptions, either. :/ The article on Lindley's paradox also mentioned another interesting bit:
Because the sample size is very large, and the observed proportion is far from 0 and 1, we can use a normal approximation for the distribution
I don't have large sample sizes (10 to 500, depending on the monster), and since I'm only interested in drop rates for rare (=interesting) items, my initial estimates are somewhere around 1-10%, close to 0. So shouldn't I avoid modeling by normal distribution?
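To put a rough number on that worry, here's a quick check of what the normal approximation does for a small sample with a rate near zero (Python with scipy assumed; the counts are invented but in the range discussed above):
Language: python

# Why the normal approximation is shaky for small n and p close to 0:
# the resulting interval can extend below 0, which is impossible for a drop rate.
from math import sqrt
from scipy.stats import norm

n, m = 50, 2                 # e.g. 50 kills, 2 drops -> estimated rate 4%
phat = m / n
se = sqrt(phat * (1 - phat) / n)
z = norm.ppf(0.975)          # ~1.96 for a 95% interval
print(phat - z * se, phat + z * se)  # roughly (-0.01, 0.09): negative lower bound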
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Sure, approximations are often needed, but you need to be mindful of where you're approximating (and how much you're off), and in my experience that step is often completely ignored in stochastics. Or maybe I only talked to the wrong people. Didn't mean to sound ungrateful for your help, though. Thanks!
So let's do the integral:
Pn,m(p) = p^m * (1-p)^(n-m)
Pn,m(p) = p^m * sum(k=0 to n-m) (n-m choose k) (-1)^k p^k
Pn,m(p) = sum(k=0 to n-m) (n-m choose k) (-1)^k p^(k+m)
integral = sum(k=0 to n-m) ((-1)^k (n-m choose k)) / (k+m+1) * p^(k+m+1)
Let's retry my older numbers:
I throw the coin 10 times and get 9 tails (0) and 1 heads (1)
x_m = 0.1
sigma_m = sqrt( (9*0.1^2 + 1*0.9^2) / (10*11) ) = 0.09045
int(0 -> 1) P10,1(p) = 0.0090909090909084
int(0.01 -> 0.19) P10,1(p) = 0.0058386180381549
64.41% chance of P(X=heads) being in that interval, while your formula says 68.27% for one stddev. That's close enough to believe that I don't have an error in my formula, but far enough to believe that your suggested approach isn't too useful for low n.
Now I do this again, 100 times, getting 10 heads.
x_m still 0.1
sigma_m = sqrt( (90*0.1^2 + 10*0.9^2) / (100*101) ) = 0.02985
I'm getting 77.6% instead of 68.27%, though with summands like (90 choose 45)/56 * x^56 rounding errors are bound to be problematic. I'll try again tomorrow with a proper math library. Any hope of having this as a neat formula in OOCalc went out the window, anyway. :/
I could solve the die case by independently examining 6 different random variables: Xi = 1 when the die says i, 0 otherwise. Would my confidence interval for throwing a 1 be any different if 12 die throws show each side twice, or show a one twice and a six ten times? My guess for P(X=1) is 1/6 in both cases, but would a high number of sixes make me any more or less confident about that guess? I don't think so, but as always I may be wrong. (I'm not interested in a die's average value, which I understand to be a standard case for the approach you suggested. I want to know the distribution of X, the probabilities for each side.)
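One note for the "proper math library" attempt: with the uniform prior, P'n,m(p) = p^m * (1-p)^(n-m) is an unnormalized Beta(m+1, n-m+1) density, so the interval probabilities can be read straight off the beta CDF instead of expanding that alternating sum. A short sketch, assuming Python with scipy:
Language: python

# The normalized version of p^m * (1-p)^(n-m) is a Beta(m+1, n-m+1) density,
# so the chance of p lying in [a, b] is a difference of beta CDFs.
# This sidesteps the huge alternating sums and their rounding problems.
from scipy.special import beta as beta_fn
from scipy.stats import beta

def interval_mass(n, m, a, b):
    return beta.cdf(b, m + 1, n - m + 1) - beta.cdf(a, m + 1, n - m + 1)

print(beta_fn(2, 10))                    # 0.00909..., the int(0 -> 1) value above
print(interval_mass(10, 1, 0.01, 0.19))  # ~0.64, matching the 10-toss case above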
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
But wouldn't the partitioning of results depend on the order? The samples (0,0,0,0), (1,1,1,1) will yield different results than (0,1,0,1), (0,1,0,1), even though both have a clean 50/50 split. You'll also get different values depending on the partition size. If you take a run of 1000 samples and partition it into 10 samples of 100 each, you'll get different results than with 5 samples of 200 or 20 samples of 50. Will the result actually allow more precise statements than "the std-dev of the thing I just measured is x"? Can that approach yield *strict* values for the probability interval of P(X='head')? This looks like a lot of mumbo-jumbo to me, something I often hear in stochastics... "yeah, totally wrong, but the law of large numbers evens it out as we approach infinity. Don't worry!" :/
The thing is, I can't just pull samples out of thin air. I can't get 100 samples, n times. I'm lucky if I have 100. I'm very much aware that that is a small number and the calculated mean is unreliable. I'm interested in knowing how unreliable it is. But for this to work, I need maths that don't require thousands of samples to be accurate.
My approach would be this. To start with a discrete example, assume two coins:
Coin A has a 1/3 chance of heads
Coin B has a 2/3 chance of heads
I pick a coin randomly and want to determine which one I got. I toss it 10 times, getting 4 heads, 6 tails.
If I'm tossing coin A, then the chance for 4 heads is (1/3)^4 * (2/3)^6 * (10 choose 4) = 64/59049 * 210 ~= 22.7%
If I'm tossing coin B, then the chance for 4 heads is (2/3)^4 * (1/3)^6 * (10 choose 4) = 16/59049 * 210 ~= 5.7%
We now know P(4 heads | picked coin A) = P(picked A and got 4 heads) / P(picked A) = 0.5 * 22.7% / 0.5 = 22.7%
I'm interested in P(picked A | got 4 heads) = P(picked A and got 4 heads) / P(got 4 heads) = 0.5 * 22.7% / (0.5 * 22.7% + 0.5 * 5.7%)
In other words, we just consider the sum of both probabilities as 100% and scale both values accordingly. This yields an 80% chance that I'm holding Coin A and a 20% chance that I'm holding Coin B. Pretty much what I expected: "It's probably Coin A, but too soon to be sure." Tossing 100 times and getting 40 heads would yield a 99.99990% chance to hold Coin A, which would then be a very convincing argument. So far, is there a flaw in my line of thought?
Now let's extend this to the non-discrete case. There is an infinite number of coins with every possible probability, and we picked one. Equivalently, there is just one coin with an unknown probability. There's an implicit assumption here: that every coin has the same chance to be picked. In other words: the unknown coin we're holding can have any probability, and each is equally likely. Since we don't know anything about the coin we're holding, I think that's the only valid assumption.[1]
Now we define a function Pn,m(p) := if we toss a coin with probability p exactly n times, how likely are we to get heads m times?
Pn,m(p) = p^m * (1-p)^(n-m) * (n choose m)
Get rid of the terms that don't involve p, they'll cancel each other out later:
P'n,m(p) := p^m * (1-p)^(n-m)
We now need to find the integral from 0 to 1 to get our "100%" to which we need to scale. If we calculate the integral from a to b and divide by the integral from 0 to 1, we'll get the chance that the coin's probability for heads is between a and b. Correct so far? Based on that, we need to find suitable a and b to get our target confidence. Worst case, we'll use a binary search to narrow down the interval until we get one we like.
There's just one slight problem: does anyone know how to calculate the integral of Pn,m? ;) Wolfram Alpha timed out trying to solve it and it's way beyond my high-school math capabilities.
[1] We could guess a better initial distribution based on samples, but that would already introduce a bias. I'm not trusting my samples to be accurate, so the only safe assumption is that - even despite my gained knowledge - the coin could be anything. We're already considering the samples by eliminating the coin probabilities that were unlikely to yield my samples; considering the samples twice would be wrong.
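For anyone who wants to check the two-coin arithmetic above, a few lines of plain Python do it (nothing beyond the standard math module):
Language: python

# Check of the two-coin example: uniform prior over {A, B},
# binomial likelihood of 4 heads in 10 tosses, then normalize.
from math import comb

def likelihood(p, n, m):
    return comb(n, m) * p**m * (1 - p)**(n - m)

la = likelihood(1/3, 10, 4)    # ~0.2276, the 22.7% above
lb = likelihood(2/3, 10, 4)    # ~0.0569, the 5.7% above
print(la / (la + lb), lb / (la + lb))   # 0.8 and 0.2: 80% Coin A, 20% Coin B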
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Ok, I'll try some more values.
I throw the coin 10 times and get 9 tails (0) and 1 heads (1)
x_m = 0.1
sigma_m = sqrt( (9*0.1^2 + 1*0.9^2) / (10*11) ) = 0.09045
Now I do this again, 100 times, getting 10 heads.
x_m still 0.1
sigma_m = sqrt( (90*0.1^2 + 10*0.9^2) / (100*101) ) = 0.02985
For the 68% certainty case, do I just fit an interval around x_m to get [0.1 - 0.09045, 0.1 + 0.09045], or is the result more involved? For 95.56% I'd get 0.1 - 3*0.09045 as the left boundary, which is negative, so I'm not sure I got this right. Does it even make sense to model a probability as Gauss/normal distributed, ignoring the known boundaries of [0, 1]?
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Ok, say I flip a coin twice and get "heads" both times. With a fair coin, there's a 50% chance of getting the same side twice, so it's nothing unusual or conclusive.
Let's encode "tails" as 0 and "heads" as 1, so we have x_1=1, x_2=1, n=2.
I'm interested in P(X=1), because I don't trust the coin. x_m = 1, so P(X=1) = 1 is my initial guess. What's the likely range for the actual probability?
sigma = sqrt((0+0)/1) = 0
sigma_m = 0/sqrt(2) = 0
In this case, the formula would suggest a 100% chance that my guess is correct, which is obviously wrong. Did I apply the formula wrong? If it breaks down with n=2, why would I trust it on larger samples?
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Less of a challenge, more of a "help me with this". Let's say I have a discrete random variable with a known, finite set of possible outcomes. My dice could yield {1,2,3,4,5,6}, my pizza service could deliver {pizza with onions, pizza without onions} when I ordered without, the monster I've slain could drop one of {nothing, +5 Sword of Penis Envy, +3 Boots of Asskicking}. (Samples are of course independent from each other; we do not assume my pizza service learns from past mistakes.)
I want to determine the probability distribution by acquiring random samples. I can guess a distribution by dividing each outcome's count by the number of samples, but how likely is my distribution to be anywhere close to correct? Flipping a coin once would yield 100% chance heads, 0% chance tails. That may be my best guess given the data, but it's still useless. The quality of my guess is going to improve with the number of samples, but is there a way to quantify the quality of my guess?
I'd like to make a statement like: "Given these samples, it's 95% likely that the chance to roll a 6 is between a and b." That would reduce the one-coinflip measurement to the statement: "The chance to flip heads is probably between ~5% and 100%.", which is a lot less misleading. It seems like confidence intervals fit that definition, but I fail to apply the math listed there to my problem. Further googling didn't turn up anything useful, either. Can someone more immersed in statistics help?
m00
Tub
Experienced Forum User
Joined: 6/25/2005
Posts: 1377
Provide more info. Mobo model, CPU model, PSU model? Your PSU should contain an ATX version number. Which one? At what wattage is it rated? Your mobo should contain some information about the required ATX version as well. This may be useful if you need in-depth info about the connectors of each revision. I don't know which CPUs require more than 2 lines. All I know is that my Athlon 64, rated at 65W, is happy with two.
m00