Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Famously, lim(x->∞) (1 + a/x)^(bx) = e^(ab). This got me thinking about what happens if that 1 were something else. Not a very hard problem, given that even I could figure it out, but I thought you might find it a mildly interesting question.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
This is a very easy problem mathematically speaking, but that's not the challenge. The problem is: What is the area of the red square compared to the area of the green square? The challenge is not finding the answer. The challenge is to find a geometric proof (or, could it perhaps be more accurate to say "geometric argument"?) that requires no math at all. Perhaps put your answer in spoiler tags, if other people would want to think about it as well.
Player (36)
Joined: 9/11/2004
Posts: 2630
Warp wrote:
This is a very easy problem mathematically speaking, but that's not the challenge. The problem is: What is the area of the red square compared to the area of the green square? The challenge is not finding the answer. The challenge is to find a geometric proof (or, could it perhaps be more accurate to say "geometric argument"?) that requires no math at all. Perhaps put your answer in spoiler tags, if other people would want to think about it as well.
The red square is half the area of the green square: you can cut the red square in half along a diagonal and place the hypotenuses on two edges of the green square to see geometrically that it covers half.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2072)
Joined: 6/15/2005
Posts: 3282
Warp wrote:
Famously, lim(x->∞) (1 + a/x)^(bx) = e^(ab). This got me thinking about what happens if that 1 were something else.
Not that hard. Consider lim(x->∞) (c + a/x)^(bx).

If c>0, then:

lim(x->∞) (c + a/x)^(bx) = lim(x->∞) c^(bx) · lim(x->∞) (1 + a/(cx))^(bx) = e^(ab/c) · lim(x->∞) c^(bx),

which results in:
● ∞, if c>1 and b>0, or 0<c<1 and b<0,
● 0, if 0<c<1 and b>0, or c>1 and b<0,
● e^(ab), if c=1,
● 1, if b=0.

If c=0, then:

lim(x->∞) (a/x)^(bx) = e^(lim(x->∞) bx·ln(a/x)),

which results in:
● 0, if a≥0 and b>0,
● ∞, if a≥0 and b<0,
● 1, if b=0.

All other cases result in real powers of negative numbers or 0^0, which are undefined.
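As a quick numerical sanity check of a few of these cases (the specific values of a, b, c below are arbitrary choices of mine, not anything special):

```python
import math

def f(c, a, b, x):
    # numerically evaluate (c + a/x)^(bx) at a large but finite x
    return (c + a / x) ** (b * x)

# c = 1: should approach e^(ab) = e^6 ≈ 403.43
print(f(1, 2, 3, 1e6))
print(math.exp(6))

# 0 < c < 1 with b > 0: should collapse to 0
print(f(0.5, 2, 3, 200))

# c > 1 with b > 0: should blow up
print(f(2, 2, 3, 200))
```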
Warp wrote:
The challenge is not finding the answer. The challenge is to find a geometric proof (or, could it perhaps be more accurate to say "geometric argument"?) that requires no math at all.
Here's an image I made which shows why the green square is twice the area of the red square:
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
gotta go fast
Warp wrote:
Famously, lim(x->∞) (1 + a/x)^(bx) = e^(ab). This got me thinking about what happens if that 1 were something else. Not a very hard problem, given that even I could figure it out, but I thought you might find it a mildly interesting question.
If |c|<1, then lim((c+a/x)^x, x, +∞) has the form (something that eventually is less than 1 in absolute value)^(something that approaches +∞) and therefore approaches 0; the case for |c|>1 is analogous (it approaches a directed infinity with the same polar angle as c), and in the case where |c|=1 but c does not equal 1, there is no limit.
i.imgur.com/QiCaaH8.png
Player (36)
Joined: 9/11/2004
Posts: 2630
As noted by thatguy, Fibonacci numbers have the property that F_(mn) is divisible by F_m and F_n (but not necessarily by F_m·F_n; (F_3)^2 ∤ F_9 is a simple counterexample). Of course, because Fibonacci numbers have exponential growth rates, F_(mn) is much larger than F_m and F_n and has other factors. If we consider the set of divisors of some positive integer n (not including n itself), we can generate several numbers which are divisors of F_n. If we find the LCM of these numbers, we have a number comprised of all of the "Fibonacci factors" of F_n. Dividing F_n by this number, we have a number comprised of the "left Over factors"; let's call this number O_n. With the exception of O_6 = 4, these numbers seem to be square-free. Are they? A short Python program and a sample run can be found at https://pastebin.com/iKjBq13C
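For anyone who doesn't want to open the pastebin, here's a rough Python sketch of the same idea (my own reconstruction, not the linked program; `leftover(n)` is O_n as defined above):

```python
from math import gcd
from functools import lru_cache, reduce

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def leftover(n):
    # O_n: F_n divided by the lcm of F_d over the proper divisors d of n
    divs = [d for d in range(1, n) if n % d == 0]
    l = reduce(lambda acc, d: acc * fib(d) // gcd(acc, fib(d)), divs, 1)
    return fib(n) // l

def is_squarefree(m):
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

print(leftover(6))  # 4
print([n for n in range(2, 60) if not is_squarefree(leftover(n))])  # [6]
```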
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2072)
Joined: 6/15/2005
Posts: 3282
OmnipotentEntity wrote:
With the exception of O_6 = 4 these numbers seem to be square-free. Are they?
In fact, prime powers dividing Fibonacci numbers have been studied before. What is known so far may surprise you.

There are two ways in which O_n can fail to be square-free. For a prime p:
(1) n is the smallest positive integer i for which p divides F_i, and p^2 also divides F_n.
(2) n is the smallest positive integer i for which p^k divides F_i (k≥2), and p^(k+1) also divides F_n.

Start with case (2) first.

Case (2): We have the identity (when m≥0)

F_(mn) = Sum[i=1 to m]( C(m,i) · F_n^i · F_(n-1)^(m-i) · F_i )   (*)

where C(m,i) is the binomial coefficient. See the bottom of this post for details; I won't prove (*) unless someone asks.

Let p^k (k≥1) be the highest power of p dividing F_n. Take (*) mod p^(k+1): F_(mn) ≡ m·F_n·F_(n-1)^(m-1). Since consecutive Fibonacci numbers are relatively prime, the smallest m for which p^(k+1) divides the RHS is p.

Let m=p and take (*) mod p^(k+2): F_(pn) ≡ p·F_n·F_(n-1)^(p-1) + C(p,2)·F_n^2·F_(n-1)^(p-2). If p is an odd prime (so that p divides C(p,2)), or if k>1, then C(p,2)·F_n^2·F_(n-1)^(p-2) ≡ 0 (mod p^(k+2)) and so F_(pn) ≡ p·F_n·F_(n-1)^(p-1) ≢ 0 (mod p^(k+2)).

So whenever F_n is the smallest Fibonacci number divisible by p^k (k≥1), and p^(k+1) does not divide it, then the smallest Fibonacci number divisible by p^(k+1) is F_(pn), and p^(k+2) does not divide that. The only exception is p=2, k=1, which corresponds to O_6 = 4, a non-square-free number.

Case (1): Here's the part which is up in the air and nobody knows. If n is the smallest m for which p divides F_m, and p^2 divides F_n, then p is a Wall-Sun-Sun prime. (It can be shown that π(p) = n, 2n, or 4n, where π(p) is the Pisano period as defined on that page.) No Wall-Sun-Sun primes are known to exist, and there are none below 9.7×10^14.

So all this confirms that O_6 = 4 is very likely the only exception within a reasonable range.

------------------------------------

About (*): (*) is generalized as (when m≥0):

F_(mn+r) = Sum[i=0 to m]( C(m,i) · F_n^i · F_(n-1)^(m-i) · F_(i+r) )   (**)

which is identity (7) in this paper.
(**) can be proved using F_(m+n) = F_(n+1)·F_m + F_n·F_(m-1) along with some clever induction.
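A quick brute-force check of (**) over a grid of small values (a sanity test, not a proof):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def rhs(m, n, r):
    # right-hand side of (**): Sum[i=0 to m] C(m,i) F_n^i F_(n-1)^(m-i) F_(i+r)
    return sum(comb(m, i) * fib(n)**i * fib(n - 1)**(m - i) * fib(i + r)
               for i in range(m + 1))

# verify F_(mn+r) == RHS for small m, n, r
ok = all(fib(m * n + r) == rhs(m, n, r)
         for m in range(6) for n in range(1, 9) for r in range(5))
print(ok)  # True
```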
Player (36)
Joined: 9/11/2004
Posts: 2630
(1) n is the smallest m for which p divides F_m, and p^2 also divides F_n.
Does this imply that n = m? That seems nonsensical, so I must be misunderstanding. What is the relationship between n and m?
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2072)
Joined: 6/15/2005
Posts: 3282
OmnipotentEntity wrote:
(1) n is the smallest m for which p divides F_m, and p^2 also divides F_n.
Does this imply that n = m? That seems nonsensical, so I must be misunderstanding. What is the relationship between n and m?
What I should have said was: "n is the smallest positive integer i for which p divides F_i", or "n is the smallest element of {i natural number : p divides F_i}", or "p divides F_n and p does not divide F_i for 0 < i < n." I have edited the above post.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Warp wrote:
The classic proof that you often hear that there are infinitely many primes is a simplified version of Euclid's argument. In other words, the "if there are finitely many primes, multiply all of them and add one, and you get a number that's not divisible by any of those primes, therefore there has to be at least one more prime" thingie. I don't remember when or where I heard this "proof" for the first time, but even back then it bothered me when it just makes the claim that you get a number that's not divisible by any of the listed primes, without even an attempt at a cursory informal argument about why that's the case.
blackpenredpen made a video proving this, and for the first time in my life someone actually presented that final part of the proof in a way that's easy to understand. In summary (using my own words):

Let's assume there are finitely many primes. Let's take the product of all these primes and call it X, and let Q = X+1. Q itself can't be prime, else the assumption would be wrong. Therefore it has to have a prime factor P. According to our assumption, P must be in the list of primes. Therefore P divides both X and Q (because P is one of the factors of X, as defined above, and also a factor of Q, as deduced above). It thus follows that P also divides Q-X. An easy way of understanding why is that since P is a factor of both, it can be taken out of the subtraction as a common term (and this is the part that was crucial for me finally understanding this). But we know what Q-X is: it's 1 (because Q = X+1). That would mean P divides 1, which is impossible.

It was explained before (and the original proof by Euclid mentions it), but that one part above finally made it clear.
Player (36)
Joined: 9/11/2004
Posts: 2630
It's easier if you think of it in terms of modular arithmetic. Any number that is divisible by p is congruent to 0 mod p. So if we have a finite list of primes p_1, p_2, ..., p_n, then we can construct a number which is congruent to 0 mod every p_i. When you add one to this number, you get a number congruent to 1 mod every p_i, which is therefore not divisible by any p_i.
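To make that concrete, here's a tiny Python check with a deliberately incomplete "list of all primes" (the six primes are just an example):

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]     # pretend this list is complete
q = prod(primes) + 1              # 30030 + 1 = 30031

print([q % p for p in primes])    # [1, 1, 1, 1, 1, 1]: q ≡ 1 (mod p) for every p
print(q == 59 * 509)              # True: q's actual prime factors are new primes
```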
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
There is one very interesting proof of the infinitude of primes using sines, which I like very much, even though it is just a contrived way of writing Euclid's original proof.

Suppose the number of primes is finite. Then let x denote the product of sin(pi/p) over all primes p. Since p >= 2, we have 0 < pi/p <= pi/2 and therefore 0 < sin(pi/p) <= 1. In particular, x > 0.

Now, let K denote the product of all primes p. Certainly, K/p is an integer, and because the sine function has a period of 2pi, we have

sin(pi/p) = sin(pi(1+2K)/p)

So, we can rewrite x as

x = prod_p sin(pi(1+2K)/p)

Now, since our list contains every possible prime, and 1+2K > 1 must have some prime factor, there should be a prime p on the list for which (1+2K)/p is an integer. For that prime, the corresponding factor of x is sin(n*pi) with n an integer, which is just 0. Therefore x = 0, contradicting our first observation that x > 0. So, there is an infinite number of primes.
Tub
Joined: 6/25/2005
Posts: 1377
If you need to understand why Q isn't divisible by any of the established primes, then the simplest intuition I know is this: for every integer p > 1, if X is a multiple of p, then X+p is the next multiple, and any number strictly between X and X+p (including X+1) is not divisible by p. This gets cleaner and more rigorous by invoking modular arithmetic, but you don't need to know modular arithmetic (nor trigonometry) to understand it.
m00
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I'll post a challenge that I had some fun doing, and which actually has applications (in my work, at least). The original theory is found in this awesome paper.

Consider a binary alloy. If you don't know what that is, it's a mixture of two solids. Let us call them A and B, where each of these solids has only one atomic element in its chemical composition. Although not a fancy topic like black holes and quantum gravity, most of the theory of alloys is very speculative and remains mostly unsolved. Anyway, the alloys which we can say something very nontrivial about are those with substitutional disorder, those that look like this:

The idea is simple. A binary alloy with substitutional disorder maintains the crystal structure of the pure solids; the only disorder comes from different atomic species occupying random lattice sites. The only macroscopic property of the alloy is its composition, which in the example could be the ratio of black atoms to the total number. The objective of any model is to find out how the alloy's other properties (like conductivity or optical absorption) depend on its composition.

For technical reasons, it is much simpler to calculate things in a system that is periodic, which the alloy clearly is not. So, in the example above, we would approximate it as a periodic system. For example, we could pick a 2x2 square, with black and white atoms at some positions, and repeat it indefinitely in both dimensions. We do this for all 2^4 = 16 possible 2x2 squares (yes, some of those are actually equivalent to each other, and I once had to develop a code to reduce the number of possibilities to test; it is a very hard problem for realistic crystal structures, so we should ignore it now), and calculate whatever we want in each of these 16 configurations. In the end, we do an average to obtain the value in the mixture. Let us denote the alloy composition (fraction of black atoms) by x. The property you have calculated in the i-th configuration has a value P_i.

If we have the probability of occurrence x_i of the i-th configuration, which depends on x (that should be evident: if x is very close to 1, the probability of a configuration with lots of black atoms should be much higher than in the case where x is very close to 0), then we can calculate the average in the alloy at composition x as:

P(x) = sum_i x_i(x) P_i

The thing is, it is not so simple to calculate x_i(x). Some alloys really like mixing together; in this case the values of x_i would be very close to random. Others are like water and oil and don't like to mix at all; in this case, the probability for mixed configurations would be very close to 0.

The paper I linked to in the beginning of the post tells you how to calculate x_i(x). The idea is the following: you need to provide some energy to create a given configuration, which we call the formation energy E_i. If this energy is very large, the probability of the configuration occurring is very low; if this energy is small, the probability is high. Alloys also have a growth temperature T. It varies a lot depending on the alloy; some are as high as 1000 K, others as low as 100 °C (you can cook them). If the growth temperature is very large, differences in energy shouldn't matter at all and the alloy should be completely random.

Suppose the total number of atoms in any configuration is N (in the 2x2 square example, N=4), and consider that the i-th configuration has n_i black atoms. The formula for calculating x_i(x) is (define b = 1/kT, where k is Boltzmann's constant):

x_i = a^(n_i)·exp(-b·E_i) / sum_j( a^(n_j)·exp(-b·E_j) )   (1)

where a is a positive dimensionless constant. To find it, you must solve a polynomial equation (which is easy to do numerically) that you obtain by imposing the constraint

sum_i x_i·n_i = N·x,   (2)

which just means that the average number of black atoms should equal the number specified by the composition x.

Now, to the actual challenge, which is something that helped me a lot when I figured it out (I can tell the story if you guys want): using the system of equations (1) and (2), prove that if you shift all energies by a linear function of the number of atoms, that is, E_i -> E_i + A + B·n_i, the probabilities x_i obtained from the equations are the same.
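Not a solution, but for anyone who wants to play with equations (1) and (2) numerically before attempting the proof, here's a rough Python sketch. Everything below (the five toy configurations, their made-up energies, and solving (2) by bisection rather than as a polynomial) is my own arbitrary setup, not from the paper:

```python
import math

def probs(energies, n_black, a, b):
    # equation (1): x_i proportional to a^(n_i) * exp(-b * E_i)
    w = [a**n * math.exp(-b * E) for E, n in zip(energies, n_black)]
    z = sum(w)
    return [wi / z for wi in w]

def solve_a(energies, n_black, N, x, b, lo=1e-12, hi=1e12, iters=200):
    # find a > 0 satisfying constraint (2): sum_i x_i * n_i = N * x.
    # the mean black-atom count increases monotonically with a, so bisect
    # (geometrically, since a may span many orders of magnitude)
    def mean(a):
        p = probs(energies, n_black, a, b)
        return sum(pi * ni for pi, ni in zip(p, n_black))
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if mean(mid) < N * x:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# toy setup: N = 4 sites, configurations with n_i black atoms, made-up energies
energies = [0.0, 0.3, 0.5, 0.4, 0.1]
n_black = [0, 1, 2, 3, 4]
N, x, b = 4, 0.4, 2.0

a = solve_a(energies, n_black, N, x, b)
p = probs(energies, n_black, a, b)

# shift E_i -> E_i + A + B*n_i and re-solve: probabilities should not change
A, B = 1.7, -0.6
shifted = [E + A + B * n for E, n in zip(energies, n_black)]
a2 = solve_a(shifted, n_black, N, x, b)
p2 = probs(shifted, n_black, a2, b)

print(max(abs(u - v) for u, v in zip(p, p2)))  # tiny: identical up to roundoff
```

(Seeing which value of a the shifted system settles on is itself a good hint for the proof.)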
Joined: 7/13/2014
Posts: 36
Consider a partitioning function P_k(n), which lists all the ways you can write a positive integer n as the sum of k positive integers, where order doesn't matter. For example:
  P_2(5) = { (1,4), (2,3) }
  P_3(5) = { (1,1,3), (1,2,2) }
  P_2(6) = { (1,5), (2,4), (3,3) }
For any given list, you can find the LCM of each individual partition, for example:
  LCM(1,4) = 4
  LCM(2,3) = 6
Those are all of the 2-partitions of 5, and the biggest LCM is 6. That is,
  Max( LCM( P_2(5) ) ) = 6
For the sake of brevity, let's condense all of that into a single function:
  a_k(n) = Max( LCM( P_k(n) ) )
In general, you would expect a_k(n) to get bigger as n gets bigger, and much faster than the lower bound of n-k+1 guaranteed by the partition (1, 1, ..., n-k+1). The bigger n is, the more ways there are to write it as the sum of smaller positive integers, and thus the more "chances" there are to have a big LCM. But it's not always the case that a_k(n) grows. For example:
  a_2(5) = 6 ≥ 5 = a_2(6)
  a_3(10) = 30 ≥ 21 = a_3(11)
So my questions are twofold:
  • What is the asymptotic behavior of a_k(n)? Is there a nice closed-form expression?
  • How often is it true that a_k(n) ≥ a_k(n+1)? If it only happens finitely many times, when is the last one?
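If anyone wants to experiment, a_k(n) is easy to brute-force for small inputs. A quick sketch of mine (not optimized at all):

```python
from math import gcd
from functools import reduce

def lcm(nums):
    return reduce(lambda u, v: u * v // gcd(u, v), nums, 1)

def partitions(n, k, smallest=1):
    # all ways to write n as k positive integers in nondecreasing order
    if k == 1:
        if n >= smallest:
            yield (n,)
        return
    for first in range(smallest, n // k + 1):
        for rest in partitions(n - first, k - 1, first):
            yield (first,) + rest

def a(k, n):
    # a_k(n) = max LCM over all k-partitions of n
    return max(lcm(p) for p in partitions(n, k))

print(a(2, 5), a(2, 6))    # 6 5  (the non-monotonic example above)
print(a(3, 10), a(3, 11))  # 30 21
```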
Player (36)
Joined: 9/11/2004
Posts: 2630
According to OEIS, the asymptotic behavior of a_k(n) seems to be n^k/k^k. I do not know why this would be the case.
http://oeis.org/A129647
http://oeis.org/A129648
http://oeis.org/A129649
http://oeis.org/A129650
It should be noted that the behavior of a_5(n) seems to exhibit non-monotonicity quite a bit.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (80)
Joined: 8/5/2007
Posts: 865
OmnipotentEntity wrote:
According to OEIS, the asymptotic behavior of a_k(n) seems to be n^k/k^k. I do not know why this would be the case.
I actually think this is fairly intuitive. For large n, the partition in P_k(n) that results in the largest LCM would be one "near" a distribution where all k elements are close to n/k. For example, if I want four numbers that add to 100 and multiply to the largest possible value, I would make them all 25. The LCM function just mucks them up a little bit, and so I might choose something like 19, 25, 27, and 29, numbers all clustered around 25. (The LCM of these numbers as a fraction of (100/4)^4 is 0.95.) One expects that for sufficiently large n relative to k, these numbers will be arbitrarily close to n/k. There are k such numbers in the set, so multiplied together, we get (n/k)^k. This gives us an idea of how to search for a_k(n): start with all elements around n/k, then redistribute the values slightly until we reach k numbers that are coprime. What I think is interesting is that for k << n, a_k(n) settles pretty closely to (n/k)^k for the reason I outlined above. For k ≈ n, we can also see that a_k(n) is pretty well behaved because your partition is going to become saturated with 1s. There should be a "sweet spot" for k (as a function of n) that makes this function in some sense "maximally interesting", where the partition that produces the greatest LCM is extremely nontrivial and does not settle down for large n. I'm going to guess it's something like k = n/2 or k = sqrt(n), but since I haven't really defined this sweet spot, I'm not quite sure what to look for.
Player (36)
Joined: 9/11/2004
Posts: 2630
Well, if we define a sequence of a_i(n) from i=n down to i=1 for all n < 85, we get a series of graphs. (Note: as mentioned, I essentially reversed the x axis to get everything to line up nicely and highlight the similarities between subsequent graphs.) It's clear that there's some point at which the graph peaks, and it seems to be converging on a single point. So let's highlight that:

However, that's not what you're talking about, right? There's a sweet spot where the function is doing "better than expected." So let's look at this graph on a log scale:

So it looks as if the function has a bit of a hump: it starts off faster than it finishes. So it could be said that the sweet spot is where a_k(n) / fit_k(n) is maximized. (Or subtraction too; I just think that division is a better fit because we're dealing with exponentials, maybe?) You mentioned it might be at around sqrt(n), so here's that last graph with 1/sqrt(n) added, which seems suggestive:

If anyone wants to try to prove these relationships, that would be awesome. I'm pretty useless when it comes to proofs.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Editor, Expert player (2072)
Joined: 6/15/2005
Posts: 3282
Bobo the King wrote:
What I think is interesting is that for k << n, a_k(n) settles pretty closely to (n/k)^k for the reason I outlined above.
If indeed a_k(n) is well-approximated by (n/k)^k, then the maximum would occur at k = n/e (by setting the derivative of ln((n/k)^k) with respect to k to 0 and solving for k). However, I'm not sure that the behavior of a_k(n) is well-approximated by (n/k)^k when k is not significantly small compared to n.
Player (80)
Joined: 8/5/2007
Posts: 865
I think that FractalFusion and OmnipotentEntity may have misinterpreted my post. My goal is not to find a function for k in terms of n that maximizes a_k(n) (which would be k = n/e) but rather to cause the wildest fluctuations in a_k(n) as a fraction of (n/k)^k. As I suggested, we can begin our investigation with two sample functions:
  • k constant. In this case, we expect a_k(n)·(k/n)^k to approach 1 because we should be able to find the k values that maximize their LCM really close to n/k for large n. For example, if I'm partitioning 1,000,000,000 into 10 pieces, I should be able to find all 10 numbers really close to 100,000,000. Searching for these 10 values would also take a relatively short amount of time since we know to look near 100,000,000. Because the asymptotic behavior is well-behaved and the maximum LCM can be efficiently computed, I characterize this function as, in some sense, trivial.
  • k ≈ n. In this case, we expect a_k(n)·(k/n)^k to approach 0 because our partition will naturally contain a lot of 1s. I may be wrong, but I think that for sufficiently large n, the exponential decay of (k/n)^k will vastly outstrip the relatively slow addition of 2s, 3s, and larger numbers to our partition. Again, we have a pretty good idea of how to find the partition that maximizes the function: start with k 1s, then add the remaining n-k units throughout the partition, preferentially bringing elements of the set up to prime numbers. As with the above function, this is both easy to approximate and easy to compute a partition for, so I again characterize this function as trivial.
But something interesting must (?) happen somewhere in the middle. There should be a function of k in terms of n that, when we try to find the partition that maximizes the LCM, is a complete nightmare. It may even require including many elements in our best partition that are not coprime, resulting in wild fluctuations in a_k(n)·(k/n)^k. I must make two concessions. First, I have to admit that I didn't understand many of OmnipotentEntity's graphs. Could you be more explicit about what you're graphing at each step? And second, I've used some really squirrely language throughout this post. I can't really attempt to define "wild fluctuations" or "interesting". The closest I can come is where I say finding the right partition is "a complete nightmare". In this sense, I suppose I would be looking for the function k in terms of n that maximizes the computational complexity of finding the partition that maximizes the LCM. But perhaps that's distinct from the function that produces the wildest fluctuations in a_k(n)·(k/n)^k. I hope that's more clear.
Joined: 7/13/2014
Posts: 36
OmnipotentEntity wrote:
According to OEIS, the asymptotic behavior of a_k(n) seems to be n^k/k^k. I do not know why this would be the case.
I happen to be the author of those OEIS entries. I mainly didn't link them in the OP because I didn't want to predispose anyone to thinking that information was gospel. I never had anything other than Bobo's statistical argument for the asymptotic behavior of a_k(n).
It should be noted that the behavior of a_5(n) seems to exhibit non-monotonicity quite a bit.
As k gets larger, it takes longer and longer for the function to settle down. It also takes longer for the function to "get going" and start being interesting, so really there are two thresholds worth considering: not just the last index at which a_k(n+1) < a_k(n), but also the first! So far I've had no success in elucidating the boundaries of the "interesting region". I'm really intrigued by Bobo's idea of the point of maximal interestingness, though, and your animated graphs are really cool. (Hmm, are there an n and k that have the most interesting "most interesting point"?) Another thing I tried to prove was that if you observed monotonicity for "long enough" (once you were within the interesting region, of course), you could be assured the function was fully monotonic from that point on, but I didn't really get beyond a statistical argument with that, either. (That was partially motivated by trying to make calculating A129651 easier.)
Player (80)
Joined: 8/5/2007
Posts: 865
Number theory's not really my thing, but I think I have a good candidate for the "maximally interesting" function for k: choose k = n/ln(n). The inspiration comes from the prime-counting function. The number of primes less than n is approximated by n/ln(n), and since we basically want the numbers in our set to be coprime so that their product is maximized, the problem is effectively reduced to "find all prime numbers less than n". In fact, it's even a little bit harder than that because there may be more or fewer primes than that. If we use the sieve of Eratosthenes, these primes can be found in O(n log log n), which is actually pretty efficient. (I don't know if the SoE is the most efficient algorithm; I've heard that there are optimizations of it, but I don't know if they affect the overall time complexity.) OmnipotentEntity, would you be so kind as to produce a graph of a_k(n)·(k/n)^k vs. n, where k is round(n/ln(n)) and "round" refers to rounding to the nearest integer?

Edit: With regards to OmnipotentEntity's question below...
OmnipotentEntity wrote:
Well, because of combinatorial explosion, the larger values of i and n tend to make computation longer. However, this cannot be what you mean. Can you elaborate?
I disagree that larger values of i tend to make the computation longer. If we have i = n, there is exactly one partition, in which we use n 1s. If we have i = n-1, it's the same story: n-2 1s and one 2. Things get ever so slightly nontrivial when we have i = n-2, in which case we could have n-3 1s and one 3, or n-4 1s and two 2s. Clearly, a_(n-2)(n) should always be 3 for sufficiently large n, because the second 2 would not contribute to a larger LCM and we therefore want that 3 in there. Anyway, I just mean to illustrate that the partitions are maximally complex for intermediate values of k (or i).

Edit 2: My candidate function of n/ln(n) is fine, but I now see that a much better approximation of the prime-counting function is li(n), the logarithmic integral, which is the integral from 0 to n of dx/ln(x). I mistook the two functions as identical. OmnipotentEntity, if it's not too much trouble, could you (instead) produce the graph I requested except using round(li(n))? I'd do it myself, but I don't really have a good program to do it in at this time.

Edit 3: Incorrectly defined the li function. Also, FYI, I'm working on producing a graph of a_k(n)·(k/n)^k vs. n when k = li(n). I'm using Scilab, which I forgot I downloaded but never used much. I'm not sure I'll be able to figure it out on my own, however.
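Since the sieve of Eratosthenes came up, here's a minimal sketch of it (my own quick version), along with a check of how much n/ln(n) undercounts the actual number of primes at modest n:

```python
import math

def primes_below(n):
    # classic sieve of Eratosthenes: O(n log log n) time
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

n = 10000
print(len(primes_below(n)))    # 1229 primes below 10000
print(round(n / math.log(n)))  # 1086: n/ln(n) noticeably undercounts
```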
Player (36)
Joined: 9/11/2004
Posts: 2630
Bobo the King wrote:
First, I have to admit that I didn't understand many of OmnipotentEntity's graphs. Could you be more explicit about what you're graphing at each step?
Hey, that's ok! It just means I was a bit unclear.

The first graph is essentially taking n to be time and graphing a_i(n). The left side is high values of i near n, and the right is low values. So the point on the x axis marked as 1 is actually n, 2 is actually n-1, and so on. As the value of i decreases, the value of a_i(n) increases until it reaches a peak, then rapidly drops off. Each frame is labeled with a value which is pretty stupidly calculated in retrospect: essentially, if you subtract it from 1 you get the ratio i/n such that a_i(n) is maximized within the a_k(n) family of functions.

The second image: these are the stupidly calculated values from before, except isolated and graphed against increasing n. So subtract these from 1 and you'll get something useful... maybe.

The third is exactly the same graph as the first one, except placed on a log axis, to highlight the exponential growth of the function.

The fourth is exactly the same graph as the third one, except with an exponential curve that connects a_n(n) and a_(i_max)(n). This is to highlight the fact that this function actually grows faster at the beginning than at the end, giving the "best performing" value for i with respect to n. "Best performing" means that the value of a_i(n) for this particular i is the highest compared to the fitted line, out of all the a_k(n) functions.

The last two images are again poorly calculated; subtract them from 1 and you might get something better. But essentially, when subtracted from 1, this is the ratio i/n such that a_i(n) is the "best performing" value of the family of functions a_k(n).
The closest I can come is where I say finding the right partition is "a complete nightmare". In this sense, I suppose I would be looking for the function k in terms of n that maximizes the computational complexity of finding the partition that maximizes the LCM.
Well, because of combinatorial explosion, the larger values of i and n tend to make computation longer. However, this cannot be what you mean. Can you elaborate?
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (36)
Joined: 9/11/2004
Posts: 2630
As requested: a_(round(n/ln(n)))(n) and a_(round(li(n)))(n):
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Player (80)
Joined: 8/5/2007
Posts: 865
Derp. I just realized that the primes less than n don't form a partition of n (unless n=2). I think the function we're actually looking for is something like the inverse of this sequence. I dunno, I'm kind of tapped out on this problem.
