Joined: 10/11/2012
Posts: 48
Warp wrote:
Suppose there existed a program that answers this question: Given a natural number (in base 10) and a set of mathematical operators, does there exist a closed-form expression that uses those operators (and numbers in base 10) that is equal to that input number, and that uses less symbols than the input number? (I'm counting each digit as a symbol, as well as each individual mathematical operator.) I'm wondering what the computational complexity of that program would be.
125 = 5^3 (assuming the three is actually superscripted, so it doesn't cost an operator symbol), right? It doesn't work for 1 or 2 cubed, but it would work for all other cubes.
Scepheo
Player (146)
Joined: 7/16/2009
Posts: 686
Warp wrote:
Suppose there existed a program that answers this question: Given a natural number (in base 10) and a set of mathematical operators, does there exist a closed-form expression that uses those operators (and numbers in base 10) that is equal to that input number, and that uses less symbols than the input number? (I'm counting each digit as a symbol, as well as each individual mathematical operator.) I'm wondering what the computational complexity of that program would be.
That'll depend on the set of permitted mathematical operators. The only way (I can think of) that such an algorithm could work is brute force: create and test all possible expressions that consist of fewer symbols than the number until you either find one or exhaust them all. This means that every expression needs to be evaluated, which is where the problem lies. After all, the number of symbols in a number is roughly log(n), and the number of possible expressions using x symbols from a set of y is y^x, so the number of possible expressions is y^log(n). If we can evaluate an expression in O(1) (say, we allow only basic arithmetic operations), this gives a complexity of O(y^log(n)). If we allow every possible closed-form expression, however, this quickly escalates to busy-beaver levels.
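To make the brute-force idea concrete, here's a small sketch (my own illustration, not anyone's actual program), assuming the operator set is just binary +, -, * and ^, with every digit and every operator counted as one symbol:

```python
# Sketch of the brute-force search described above (illustrative only;
# the operator set and pruning limit are my own choices).
def shorter_expression_exists(n):
    """True if some expression with fewer symbols than len(str(n)) equals n."""
    budget = len(str(n)) - 1
    LIMIT = 10 ** 12                      # arbitrary cap on intermediate values
    vals = {s: set() for s in range(1, budget + 1)}
    for s in range(1, budget + 1):
        # literals: an s-digit number costs exactly s symbols
        vals[s].update(range(10 ** (s - 1), 10 ** s))
        # glue two smaller expressions together with one operator symbol
        for sa in range(1, s - 1):
            for a in vals[sa]:
                for b in vals[s - 1 - sa]:
                    results = [a + b, a - b, a * b]
                    if 0 < b <= 40:       # keep exponentiation manageable
                        results.append(a ** b)
                    vals[s].update(v for v in results if abs(v) <= LIMIT)
    return any(n in reachable for reachable in vals.values())

print(shorter_expression_exists(65536))   # True: 4^8 needs only 3 symbols
print(shorter_expression_exists(10007))   # False: no 4-symbol expression fits
```

The LIMIT pruning is an arbitrary choice to keep the sketch finite; as noted above, with richer operator sets the intermediate values (and the whole search) blow up very quickly.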
Invariel
He/Him
Editor, Site Developer, Player (171)
Joined: 8/11/2011
Posts: 539
Location: Toronto, Ontario
You can simplify a lot of that by using lookup tables and divide and conquer, though. This won't give you an optimal solution (you may have to merge terms and such), but it will simplify searching for solutions to more difficult problems later.
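A tiny sketch of the lookup-table idea (my own illustration; the helper names are hypothetical):

```python
# Remember the cheapest expression found for each value, so later searches
# can start from known results instead of re-deriving them.
best = {}   # value -> (symbol_count, expression_string)

def record(value, cost, expr):
    """Keep only the cheapest known expression for each value."""
    if value not in best or cost < best[value][0]:
        best[value] = (cost, expr)

# pre-populate with, say, small powers: cost = digits of a + digits of b + 1
for a in range(2, 100):
    for b in range(2, 13):
        record(a ** b, len(str(a)) + len(str(b)) + 1, f"{a}^{b}")

print(best.get(65536))   # (3, '4^8') - retrieved instead of searched for
```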
I am still the wizard that did it. "On my business card, I am a corporate president. In my mind, I am a game developer. But in my heart, I am a gamer." -- Satoru Iwata <scrimpy> at least I now know where every map, energy and save room in this game is
keylie
He/Him
Editor, Emulator Coder, Expert player (2840)
Joined: 3/17/2013
Posts: 392
Scepheo wrote:
If we can evaluate an expression (say, we allow only basic arithmetic operations) in O(1), this gives a complexity of O(y^log(n)).
If I'm not mistaken, for the complexity of arithmetic problems, the size n of the input N is how much space N takes in memory, meaning n = log_2(N), and not N itself. So the complexity of the above algorithm is more like O(y^(n*ln(2)/ln(10))).
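Spelling out the base change behind that exponent (just rewriting the step above):

```latex
n = \log_2 N
\;\Longrightarrow\;
\log_{10} N = \frac{\ln N}{\ln 10} = \frac{n \ln 2}{\ln 10}
\;\Longrightarrow\;
y^{\log_{10} N} = y^{\,n \ln 2 / \ln 10}
```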
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Numberphile published another "throwing yahtzees" video today. The host asks what's the probability of getting a straight when throwing 6d6. That got me thinking: what is the probability of getting a straight if you were to throw five different dice: a d4, a d6, a d8, a d12 and a d20? (Obviously any straight that can be formed with those dice suffices; the dice don't need to be in that order to form the straight.) And while we are at it, what's the probability of throwing 5-of-a-kind with those dice?
Flip
Skilled player (1651)
Joined: 7/25/2007
Posts: 299
Location: UK
Warp wrote:
That got me thinking: What is the probability of getting a straight if you were to throw five different dice: A d4, a d6, a d8, a d12 and a d20?
There are 4 potential straights to make, easiest to visualize with a diagram: here we have 5 columns for the 5 dice, each representing that die's number range. The barely visible green lines represent the regions where each of the 4 straights can take place, with a random example given in red.

The probability of each straight is calculated separately. Each die needs to contribute some number to the straight; that number has to lie in the straight's range and must not already be occupied by the previous dice. Thus the straights have probabilities:

P(1:5) = 4/4 x 4/6 x 3/8 x 2/12 x 1/20 = 1/480
P(2:6) = 3/4 x 4/6 x 3/8 x 2/12 x 1/20 = 1/640
P(3:7) = 2/4 x 3/6 x 3/8 x 2/12 x 1/20 = 1/1280
P(4:8) = 1/4 x 2/6 x 3/8 x 2/12 x 1/20 = 1/3840

In total, P(any straight) = (8+6+3+1)/3840 = 18/3840 = 3/640.
Warp wrote:
And while we are at it, what's the probability of throwing 5-of-a-kind with those dice?
P = 4 x (1/4 x 1/6 x 1/8 x 1/12 x 1/20) = 1/11520
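Both results are easy to sanity-check with a quick Monte Carlo run; here's a quick sketch of mine (the constants are just the dice above):

```python
import random

SIDES = (4, 6, 8, 12, 20)
STRAIGHTS = [set(range(lo, lo + 5)) for lo in (1, 2, 3, 4)]

rng = random.Random(0)
trials = 10 ** 6
straights = five_of_a_kind = 0
for _ in range(trials):
    roll = [rng.randint(1, s) for s in SIDES]
    faces = set(roll)
    straights += faces in STRAIGHTS       # exactly one of the 4 straight sets
    five_of_a_kind += len(faces) == 1     # all five dice show the same value

print(straights / trials, 3 / 640)            # both ~0.00469
print(five_of_a_kind / trials, 1 / 11520)     # both ~0.0000868
```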
ALAKTORN
He/Him
Former player
Joined: 10/19/2009
Posts: 2527
Location: Italy
Can someone figure out this RNG formula?

RNG -> advanced RNG
0 -> 2684337210 -> 3668764404
1 -> 2473755763
2 -> 2263174316
3 -> 2052592869
4 -> 1842011422
5 -> 1631429975
10 -> 578522740
12 -> 157359846
13 -> 4241745695
20 -> 2767675566
100 -> 3101028990
101 -> 2890447543

Edit: Nvm, figured it out myself: x_new = ((4126914633*x_old + 2684337210) - (42528784*x_old)) mod 2^32

Edit: That doesn’t work for 2684337210 -> 3668764404…
Randil
Skilled player (1827)
Joined: 4/20/2005
Posts: 2161
Location: Norrköping, Sweden
The formula Advanced RNG = mod(-210581447*RNG + 2684337210, 2^32) works for all of them except 2684337210 -> 3668764404.
ALAKTORN
He/Him
Former player
Joined: 10/19/2009
Posts: 2527
Location: Italy
^Why did you use a negative number? Wouldn’t 4084385849 work all the same? The 2684337210 -> 3668764404 might be some kind of different RNG behaviour… I have to look more into it.

Edit: Figured out the 4th and, I hope, last formula this RNG uses, and 2684337210 -> 3668764404 falls into it. I see what you did with your simplification there Randil, pretty basic stuff I should’ve known, lol.
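For anyone following along, a quick sketch checking that the negative multiplier and 4084385849 really are the same thing mod 2^32, and that Randil's formula reproduces every pair listed above except the odd one out:

```python
M = 2 ** 32
MULT = -210581447 % M
assert MULT == 4084385849        # Randil's negative constant, made positive

pairs = [
    (0, 2684337210), (1, 2473755763), (2, 2263174316), (3, 2052592869),
    (4, 1842011422), (5, 1631429975), (10, 578522740), (12, 157359846),
    (13, 4241745695), (20, 2767675566), (100, 3101028990), (101, 2890447543),
]
for old, new in pairs:
    assert (MULT * old + 2684337210) % M == new

# the one pair the affine formula does not reproduce, per the posts above:
print((MULT * 2684337210 + 2684337210) % M == 3668764404)   # False
```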
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
There's the famous problem of placing 8 queens on a chessboard so that they don't attack each other. But how about this: How many knights can you place on a chessboard so that they don't attack each other?
Flip
Skilled player (1651)
Joined: 7/25/2007
Posts: 299
Location: UK
32: a knight always attacks squares of the opposite color to the one it stands on, so place 32 knights on all the squares of one color and none of them can attack another.
Editor, Expert player (2073)
Joined: 6/15/2005
Posts: 3282
Also, 32 is the most you can possibly place on a chessboard and how Flip described it is the only way to place them. A closed knight's tour exists for the 8x8 board; knights cannot be placed on adjacent nodes (squares) of the tour so at most 32 of them can be placed on every second node, i.e. all of one color.
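The coloring argument is easy to verify by brute force; a small sketch:

```python
# Put knights on all 32 squares of one color and check that none attacks another.
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

knights = {(r, c) for r in range(8) for c in range(8) if (r + c) % 2 == 0}
assert len(knights) == 32

for r, c in knights:
    for dr, dc in MOVES:
        assert (r + dr, c + dc) not in knights   # every attack lands off-color

print("32 same-colored knights, zero attacks")
```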
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
It's intuitively clear that √n > log(n) (which is why an O(√n) algorithm is slower than an O(log(n)) algorithm.) But when I started thinking how I could prove that's the case, it didn't seem so trivial after all. (Also, I think that proving that O(√n) > O(log(n)) is a separate issue from proving that √n > log(n). That's another problem.)
OmnipotentEntity
Player (36)
Joined: 9/11/2004
Posts: 2630
Log of what base? Because log2(n) is greater than sqrt(n) for certain small values of n.
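A quick scan of mine shows exactly where base 2 wins:

```python
import math

# integers n where log2(n) is above sqrt(n)
print([n for n in range(2, 30) if math.log2(n) > math.sqrt(n)])
# [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```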
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Amaraticando
It/Its
Editor, Player (159)
Joined: 1/10/2012
Posts: 673
Location: Brazil
Let f(n) = √n - log(n), for n > 0, n real. I assume log is the decimal logarithm. So f'(n) = 1/(2√n) - 1/(ln(10)*n). Since

- f is continuous;
- f' has only one zero, at n = 4/(ln10)², at which f(n) > 0 (and which is a global maximum or minimum);
- f(100), for instance, is known to be bigger than f(4/(ln10)²);

we can conclude that n = 4/(ln10)² is a minimum of f and that f > 0 for all n.

Edit: tags/grammar
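Numerically, as a quick check of the above (my own sketch):

```python
import math

n0 = 4 / math.log(10) ** 2               # the unique zero of f'

def f(n):
    return math.sqrt(n) - math.log10(n)

print(n0, f(n0))    # n0 ~ 0.754, f(n0) ~ 0.991 > 0
print(f(100))       # 8.0 > f(n0), so n0 is indeed the minimum
```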
OmnipotentEntity
Player (36)
Joined: 9/11/2004
Posts: 2630
Fun fact: if you set 0 = sqrt(x) - log(y, x), where log(y, x) is log base y of x, then you get y = x^(1/sqrt(x)) after a little bit of algebra. We need to find the maximum of this function, as this will give us the maximum base that will intercept sqrt(x). We find the derivative of x^(1/sqrt(x)) to be x^(1/sqrt(x)) * (1/x^(3/2) - ln(x)/(2x^(3/2))). The extreme value is where that is equal to 0. So we set it equal to 0 and solve for x, and we find that x = e^2 (we can verify this quite simply by observing that ln(e^2) = 2, so the bracket becomes 1/x^(3/2) - 2/(2x^(3/2)) = 0). We can verify this is a maximum by computing the second derivative and confirming that the curve is concave down. Therefore the maximum value of y such that log(y, x) intercepts sqrt(x) is y = (e^2)^(1/sqrt(e^2)), or e^(2/e). Therefore log(x) in any base greater than e^(2/e) will always remain below sqrt(x).
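A quick numeric illustration of mine: at the critical base e^(2/e) the two curves just touch at x = e^2, and any larger base stays strictly below sqrt:

```python
import math

b = math.e ** (2 / math.e)               # critical base, ~2.087
x0 = math.e ** 2
print(math.log(x0, b), math.sqrt(x0))    # both ~2.718: the tangent point

b2 = 2.1                                 # slightly above the critical base
print(all(math.log(t, b2) < math.sqrt(t)
          for t in (1.5, 2.0, x0, 100.0, 1e6)))   # True
```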
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I was examining the function sqrt(x)/log(x) (because I was comparing the growth rates of those two functions), and I noticed something interesting. For the sake of this examination, let's forget about the big jump around x=1, and only consider values of x significantly greater than 1. Rather obviously sqrt(x) grows faster than sqrt(x)/log(x). However, the latter grows faster than x^(1/4). (The two functions intersect somewhere in the vicinity of x=5500, where the fourth root becomes smaller than sqrt(x)/log(x).) I wonder what the limit of n is such that x^n never becomes smaller than sqrt(x)/log(x) (for values of x much larger than 1).
OmnipotentEntity
Player (36)
Joined: 9/11/2004
Posts: 2630
My intuition says that any n < 1/2 will cause x^n to become smaller than sqrt(x)/ln(x).
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
That might be so. When I was comparing sqrt(x)/log(x) to x^(1/3), at first it looked like they don't intersect, but it turns out they do indeed intersect somewhere in the vicinity of x = 24 million, where x^(1/3) becomes smaller.
Player (80)
Joined: 8/5/2007
Posts: 865
I'm a little late to the party, but here's a sketch of the proof that requires no calculus and relies on just one fact (aside from the algebraic properties of the log and sqrt functions): that both log(x) and sqrt(x) are positive for x>1. I'll be working in base 10 for convenience, but the proof is essentially identical in any base.

Consider the function f(x) = sqrt(x)/log(x), restricted to the domain (1,inf). Both the numerator and denominator are greater than zero, so this is clearly positive. We wish to show that this fraction tends to infinity as x approaches infinity.

We know sqrt(100*x) = 10*sqrt(x) and also log(100*x) = 2*log(x). Edit: *Gasp!* What am I doing? That should be log(100*x) = 2+log(x). I've edited the rest of the proof accordingly.

So f(100*x)/f(x) = 10*log(x)/(2+log(x)), which we can show tends to 10 as x approaches infinity using the definition of the limit. We can now inductively chain things along, finding ever larger values of f(x) for corresponding large values of x, effectively completing the proof.

Note that this proof doesn't require a base case for induction or rely on the continuity of the sqrt and log functions. As long as we accept that the sqrt and log functions are definable and positive for all real numbers greater than 1, no matter our starting point, the function blows up to infinity for arbitrarily large x.

I stumbled over this proof for a bit because I realized I did not know what properties of the log function we are "allowed" to use. In my real analysis textbook (by Kenneth A. Ross), the log function is defined in the last section of the book.
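The key ratio, written out (just a transcription of the step above):

```latex
f(x) = \frac{\sqrt{x}}{\log x}
\qquad
\frac{f(100x)}{f(x)}
  = \frac{10\sqrt{x}}{2+\log x}\cdot\frac{\log x}{\sqrt{x}}
  = \frac{10\log x}{2+\log x}
  \;\xrightarrow{\;x\to\infty\;}\; 10
```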
Editor
Joined: 11/3/2013
Posts: 506
OmnipotentEntity is right: log(x) (or ln(x) or whatever, all logarithms are the same to within a multiplicative constant) grows slower than any x^k for k>0. Somewhat informally: when you multiply x by 10, you multiply x^k by 10^k, but you only add 1 to log(x). Even if k is tiny (so 10^k is only a little bit bigger than 1), once x^k is large enough, multiplying it by a number slightly above 1 has more of an effect than adding 1 - in other words, at large enough x it grows faster. And of two functions, the one that grows faster must eventually become bigger. So for sufficiently large x, x^k > log x for all k>0.

Also, calculus makes this pretty easy. Differentiate both sides: d/dx log(x) = 1/x = x^-1, and d/dx x^k = k*x^(k-1), so for k>0 it is clear that the derivative of x^k grows faster (or rather, decays slower) than the derivative of ln(x). Then you just integrate both sides from a suitable starting point :)
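Even for a tiny exponent the crossover does eventually happen; a quick illustration of mine, with k = 0.01 and base-10 logs:

```python
import math

k = 0.01
for x in (1e10, 1e100, 1e237, 1e238, 1e300):
    e = int(round(math.log10(x)))
    print(f"x = 1e{e}: x^k = {x**k:.1f}, log10(x) = {math.log10(x):.0f}")
# log10 stays ahead for a very long time, but x^k overtakes it near x = 1e238
```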
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
I think that also demonstrates why O(n^k) > O(log(n)) (regardless of how close to zero k might be.)
Mister
Player (173)
Joined: 12/28/2007
Posts: 235
Location: Japan, Sapporo
Bobo the King wrote:
Note that this proof doesn't require a base case for induction or rely on the continuity of the sqrt and log functions. As long as we accept that the sqrt and log functions are definable and positive for all real numbers greater than 1, no matter our starting point, the function blows up to infinity for arbitrarily large x.
I don't think the continuity of f (or whatever like that) is superfluous; there seems to be a slight gap.

First, we see from the above proof that f(100*x)/f(x) = 10*log(x)/(2+log(x)) = 10 - 20/(2+log(x)) < 10, hence f(100^n*x) < 10^n*f(x) for any n.

Assume we could choose infinitely many a(i) such that (1) 1 < a(i) < a(i+1) < 100 and (2) f(a(i)) > 10*f(a(i+1)) for all positive integers i. Then we could show that for any M>0, there exists an x>M such that f(x) < f(a(1)), which would prevent f from diverging to infinity. The proof goes as follows: choose a positive integer n such that M < 10^n, and put x := 100^n*a(n+2), so that M < x. Then we conclude that

f(x) = f(100^n*a(n+2)) < 10^n*f(a(n+2)) < 10^n*10^(-n-1)*f(a(1)) = f(a(1))/10 < f(a(1)),

as desired.

Roughly, if there were a sequence a(i) in an interval of finite length such that f(a(i)) tends to zero, we could no longer expect f to diverge to infinity in the strict sense. We can avoid this trouble by assuming the continuity of f, as we have accepted that f is strictly positive on the entire interval (1,inf). In my opinion, it is basically impossible to discuss a transcendental property by using only algebraic properties, which is why it is transcendental! :P
Retired because of that deletion event. Projects (WIP RIP): VIP3 all-exits "almost capeless yoshiless", VIP2 all-exits, TSRP2 "normal run"
Bobo the King
Player (80)
Joined: 8/5/2007
Posts: 865
Mister wrote:
Bobo the King wrote:
Note that this proof doesn't require a base case for induction or rely on the continuity of the sqrt and log functions. As long as we accept that the sqrt and log functions are definable and positive for all real numbers greater than 1, no matter our starting point, the function blows up to infinity for arbitrarily large x.
I don't think the continuity of f (or whatever like that) is superfluous; there seems to be a slight gap.

First, we see from the above proof that f(100*x)/f(x) = 10*log(x)/(2+log(x)) = 10 - 20/(2+log(x)) < 10, hence f(100^n*x) < 10^n*f(x) for any n.

Assume we could choose infinitely many a(i) such that (1) 1 < a(i) < a(i+1) < 100 and (2) f(a(i)) > 10*f(a(i+1)) for all positive integers i. Then we could show that for any M>0, there exists an x>M such that f(x) < f(a(1)), which would prevent f from diverging to infinity. The proof goes as follows: choose a positive integer n such that M < 10^n, and put x := 100^n*a(n+2), so that M < x. Then we conclude that

f(x) = f(100^n*a(n+2)) < 10^n*f(a(n+2)) < 10^n*10^(-n-1)*f(a(1)) = f(a(1))/10 < f(a(1)),

as desired.

Roughly, if there were a sequence a(i) in an interval of finite length such that f(a(i)) tends to zero, we could no longer expect f to diverge to infinity in the strict sense. We can avoid this trouble by assuming the continuity of f, as we have accepted that f is strictly positive on the entire interval (1,inf). In my opinion, it is basically impossible to discuss a transcendental property by using only algebraic properties, which is why it is transcendental! :P
It took me a while to comprehend your post and the point you make is very subtle, but at first glance, I agree that you may have a point. I'm a bit busy at the moment, but would anyone else care to read over our two proofs and determine whether mine is valid? I can see why, with a pathological sequence a(i), the limit definition might fail to show divergence. On the other hand, I'm satisfied that I've shown that if we start with "seed values" of x between 1 and 100, log(y) can be written as log(100^n*x) for some integer n and some value of x in (1, 100], and as n increases, this tends to infinity, regardless of the value of x as long as log(x)>0. All the reals are represented and they all tend towards infinity, even though they might bounce up and down in the intervals (1,100], (100, 10000], and so on. Perhaps I'm overconfident, but I really thought the thrust of my proof was ironclad. I'll brush up on my real analysis in the meantime.

Edit: Now that I have the time to do so, I've looked over Mister's post in more detail and I now see that he is correct. Rather than bat about arbitrary series and subtle arguments, I'll work with a more concrete example.

Let sqrt(x) be any function on the interval (1, 100], with the only constraint being that it be positive in that domain. On the interval (100, inf), let sqrt(100*x) = 10*sqrt(x). Let log(x) be likewise arbitrary but positive on (1, 100], while log(100*x) = log(x) + 2 on the interval (100, inf). Using these facts, we can show that if f(x) = sqrt(x)/log(x), then f(100*x)/f(x) = 10*log(x)/(2+log(x)).

Since both sqrt(x) and log(x) are arbitrary on (1, 100], I'll choose them such that f(x) linearly decreases from 1 to 0 on (1, 100) while f(100) = 5. If you were to graph this function, it would fall down, infinitesimally close to the x-axis as x approaches 100, then spring back up for x>100. It then crashes down in a similar manner as x approaches 10,000, springs back up, and crashes down once again as x approaches 1,000,000, and so on. Clearly, this function does not tend to infinity as x approaches infinity, because it keeps dipping back down arbitrarily close to the x-axis. The problem is that even though the function is strictly positive, its infimum is zero.

One simple way to fix this is to put in just one more requirement: that inf(f(x)) be some number greater than zero over the interval (1, 100]. For the actual sqrt and log functions, f(x) never approaches zero because the limit of f(x) as x approaches 1 is infinity. I believe adding this last requirement fixes my proof, but now it doesn't seem nearly so elegant. It is not so easy to accept that f(x) > C, because we are forced to deal with a zero over zero limit, and this limit is best handled with L'Hopital's rule, which uses derivatives; my proof was not supposed to use any calculus.

Having said all this, I'm still a little confused about why my proof doesn't work. Surely I've shown that for any x in (1, 100], f(x) > 0, and based on the recursive definition of f(x), f(100^n*x) grows arbitrarily large. I can accept that my original proof was incorrect, but I wonder if I've instead proven some weaker property. Oh well.

Edit 2: As a minor addition to this post, this reminds me of the discussion of convergence in my first real analysis class. We considered some function defined piecewise as

f_n(x) = n - n^2*x on [0, 1/n] and f_n(x) = 0 on [1/n, 1]

Our professor then stated that this function converges to zero on the interval (0, 1] as n approaches infinity. This elicited some protest from students because the left "leg" clearly blows up. He was right, though, and that function was the lead-in to our discussion of uniform convergence, with uniform convergence being more powerful than the garden-variety type. Perhaps the concept of uniform convergence somehow applies here. Maybe my proof says that f(x) diverges but does not do so uniformly (although I have a feeling I'm abusing terminology to invoke "uniform divergence").
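A quick numeric illustration of that classroom example (my own sketch):

```python
# f_n(x) = n - n^2*x on [0, 1/n], and 0 on [1/n, 1].
# At any fixed x > 0 the values eventually hit 0 (pointwise convergence),
# even though sup over x of f_n(x) = n blows up (no uniform convergence).
def f(n, x):
    return n - n * n * x if x <= 1 / n else 0.0

x = 0.01
print([f(n, x) for n in (10, 50, 100, 200, 1000)])
# [9.0, 25.0, 0.0, 0.0, 0.0] - bumps up, then settles at 0 once n > 1/x
```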
Flip
Skilled player (1651)
Joined: 7/25/2007
Posts: 299
Location: UK
My current data points have a mean value of 4.4. I add an additional point, 9.2, to the set, and the mean becomes 4.7. How many data points do I have?