Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
Tub wrote:
I'm glad you noticed. You know what else makes one sound like an a-hole? Asking a question, then ignoring the answers that others have spent time providing for you. Above your post were two proofs: one proving that no more than 30.6% of all Fibonacci numbers can be removed; another proving that at most one can be removed. Yet you posit an argument suggesting removing 1/3 of all numbers. Why? I doubt it'd lead to valuable insight. I also doubt it'd help you understand the issue. I doubt that it can, in any way, be considered a good contribution to the thread.
When someone asks a question about the problem at hand, a normal person would simply show the error in the deduction. An asshole acts all smug and simply says "I already proved that you can only remove one number" without indicating what the error was, as if the objection were beneath him. At least you admit acting like a smug asshole. Well, either start acting like a normal sensible person or just fuck off.
Player (79)
Joined: 8/5/2007
Posts: 865
Warp, I like you. I really do. And although Tub's response to you was a bit acerbic, you are now in the process of derailing two threads in the Off Topic section. The common factor here is you. Drop it, please. (Same goes for you, Tub, if you're tempted to reply.)
Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
Bobo the King wrote:
Warp, I like you. I really do. And although Tub's response to you was a bit acerbic, you are now in the process of derailing two threads in the Off Topic section. The common factor here is you. Drop it, please. (Same goes for you, Tub, if you're tempted to reply.)
Now you left me wondering what the other thread you are referring to is. I honestly have no idea. "Derailing the thread" is nonsense, to be honest. I made a completely on-topic post, someone acted like an asshole towards me, and I responded in kind. Sure, this results in a couple of off-topic posts, which will probably quickly be forgotten (I, for one, am not going to turn this into any kind of prolonged flamewar, even in the unlikely case that he would be willing to.) That's hardly "derailing the thread". The thread isn't going to get "derailed" because of an exchange of unpleasantries.
thatguy wrote:
In this case, Warp has just forgotten that, when modifying an existing sum of unique Fibonacci numbers, it is not necessarily okay to make the substitution F(n) -> F(n-2) + F(n-1) because F(n-2) or F(n-1) may already appear in the sum.
You are right. I didn't consider that after making the substitution the terms in the sum might no longer be unique, because F(n-1) or F(n-2) may already be there. (Or rather, the latter, because as was established earlier, the sum consists of non-consecutive Fibonacci numbers.) For example, 11 = 8 + 3; substituting 8 -> 5 + 3 gives 5 + 3 + 3, with a repeated term. Thanks for pointing that out.
Tub
Joined: 6/25/2005
Posts: 1377
Since Warp still doesn't seem to understand why I'm actually angry at him (and wouldn't accept any other reason except that I'm an asshole), I've taken it to PM. Apologies to all, including Warp, for not doing so sooner. I still maintain that I consider the attitude Warp displayed towards this thread over the last few pages unacceptable, even though it may not have been the attitude he intended. And although it has always been an incredibly dick move to post the following link in an internet argument, I'll do so anyway. http://www.catb.org/esr/faqs/smart-questions.html Though if you disagree with me, feel free to tell me via PM. With that, back to math.
Scepheo wrote:
Basically, my proof consists of two parts: 1) Proof that we can still make all the numbers up to f(n) with any given f(k) removed. 2) Proof that we could also remove f(n) instead and still make the sums.
Even after rereading your proof, I must admit that I don't understand the invariant you're claiming and the inductive step you're doing. To replace F(n), you need to get a sum for b-F(n) that contains neither F(n-1) nor F(n-2), but your assumption only allows you to exclude one. In other words:
Scepheo wrote:
I find all you people's proofs hard to follow, honestly.
right back at you. ;) I've been meaning to post my proof in full, without the handwaving, but it's saved on the other computer, which is offline for today.
m00
Player (142)
Joined: 7/16/2009
Posts: 686
Tub wrote:
Even after rereading your proof, I must admit that I don't understand the invariant you're claiming and the inductive step you're doing. To replace F(n), you need to get a sum for b-F(n) that doesn't contain F(n-1) nor F(n-2), but your assumption only allows you to remove one.
Dammit, you're right. The proof worked in my head, but it is, indeed, incorrect. Let me fix that, and also prove that we cannot remove more than one element. Note that the entire proof is repeated (and reformatted), as this is the version now on my computer.

First, let's prove that we can remove any one element from the Fibonacci sequence and still construct every natural number. Let S(x, k) be a sequence of Fibonacci numbers such that:
1) the sum of the elements of S(x, k) is x;
2) S(x, k) does NOT contain F(k).

Now assume that for any k < n and x < F(n) there exists an S(x, k). It is easy to see that this holds for n = 5 (k < 5, x < 5). We then want to prove that this also holds for k < n + 1 and x < F(n + 1). We do this by splitting the proof into two cases:
A) k < n
B) k = n

A) k < n: We already know that for any given x < F(n) there exists an S(x, k) (this is our induction hypothesis). We can then easily see that there is a solution for S(F(n), k), namely F(n) itself. We can also easily see that for any y with F(n) < y < F(n + 1), a solution for S(y, k) is S(y - F(n), k) + F(n), as 1 <= y - F(n) < F(n).

B) k = n: In this case, for any given y with F(n) < y < F(n + 1), we can use the solution S(y, n) = S(y - F(n - 1), n - 1) + F(n - 1). We know a solution for S(y - F(n - 1), n - 1) exists, as y - F(n - 1) < F(n) and n - 1 < n. A possible solution for S(F(n), n) is, of course, F(n - 1) + F(n - 2). It then follows by induction that we can remove any one Fibonacci number and still construct all the natural numbers.

Secondly, we want to prove that we can remove only one Fibonacci number. If we remove any two terms F(a) and F(b), a > b, it follows that the largest number we can make using only the terms F(i) with i < a is the sum of all terms up to F(a - 1), minus F(b). The sum of all terms up to F(n) is F(n + 2) - 1, so this largest number is F(a + 1) - F(b) - 1. This means we cannot make F(a + 1) - F(b) using any terms F(i) with i < a, and we also cannot use any terms F(i) with i > a, as F(i) >= F(a + 1) > F(a + 1) - F(b). As such, it follows that we can remove only a single element from the set of Fibonacci numbers.
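Both directions are easy to sanity-check by brute force over a finite prefix of the sequence. A quick sketch (treating F(1) and F(2) as two separately usable 1s, as in the proof; the helper names are mine):

```python
def fib_terms(m):
    """First m Fibonacci terms, with F(1) = F(2) = 1 kept as two separate usable terms."""
    F = [1, 1]
    while len(F) < m:
        F.append(F[-1] + F[-2])
    return F

def reachable(terms, limit):
    """All sums of distinct terms up to limit, via subset-sum DP."""
    sums = {0}
    for t in terms:
        sums |= {s + t for s in sums if s + t <= limit}
    return sums

F = fib_terms(12)            # 1, 1, 2, 3, 5, ..., 144
bound = sum(F) - F[-1]       # a bound safe for the finite prefix, whichever term is removed

# Removing any single term: every number up to the bound is still a sum.
one_removed_ok = all(
    set(range(1, bound + 1)) <= reachable(F[:i] + F[i + 1:], bound)
    for i in range(len(F))
)

# Removing two terms, e.g. F(4) = 3 and F(6) = 8: the predicted gap
# F(4+1+... ) -- concretely F(7) - F(4) = 13 - 3 = 10 -- appears.
two_removed = [F[i] for i in range(len(F)) if i not in (3, 5)]  # 0-based positions
gap_missing = 10 not in reachable(two_removed, bound)
```

Running this confirms both halves of the argument on the first twelve terms.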
Tub
Joined: 6/25/2005
Posts: 1377
Looks solid and understandable. Unfortunately, I'm afraid thatguy has already run off with the star. Sorry! ;)
m00
Tub
Joined: 6/25/2005
Posts: 1377
As promised, here's my proof: Requirements:
  • every natural number has a representation as a sum of distinct Fibonacci numbers, as generated by the greedy algorithm
  • F(1) + .. + F(k-1) = F(k+1) - 1
Let's assume that F(k), k > 2, is the first Fibonacci number we're trying to remove. We show that all numbers can still be represented with F(1) .. F(k-1) and F(k+1) and beyond. In no particular order:
  • 1 .. F(k) - 1: using the greedy algorithm, since all required numbers < F(k) are available
  • F(k): F(k-1) + F(k-2), possible
  • F(k+1) - 1: Using F(1) + .. + F(k-1) = F(k+1) - 1, we know that the sum of all usable numbers smaller than F(k+1) is just barely large enough to reach the highest value we need.
  • F(k) + 1 .. F(k+1) - 1: by taking all summands up to F(k-1) (sum F(k+1)-1), then removing the summands contained in a sequence for 1 <= b <= F(k)-1, we can construct sums from [F(k+1)-1] - [1] to [F(k+1)-1] - [F(k) - 1]. The sequences for b exist per the first bullet point. The lowest number representable with this method is [F(k+1)-1] - [F(k) - 1] = F(k+1)-F(k) = F(k-1), which is lower than F(k) + 1, so this whole interval can be represented.
  • F(k+1): is available as a number, and is thus its own sum.
  • beyond F(k+1): we use the greedy algorithm to get our number below F(k+1), then use the modified strategies for the remainders above.
So, removing any number with k > 2 is possible, and we already know that F(1) = F(2) = 1 can be removed. The third bullet point above is interesting: once we have removed a number, we can never remove any higher number F(j), j > k, because we would no longer be able to represent F(j+1) - 1.
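For what it's worth, the complement trick in the fourth bullet point translates directly into code. A sketch (1-based indices, with F(1) and F(2) kept as two separate usable 1s; the function names are mine):

```python
def fib_index(n_max):
    """F[1] = F[2] = 1, F[i] = F[i-1] + F[i-2]; the two 1s count as separate terms."""
    F = {1: 1, 2: 1}
    for i in range(3, n_max + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F

def greedy(x, F, max_idx):
    """Greedy sum of distinct indices <= max_idx representing x."""
    picked = []
    for i in range(max_idx, 0, -1):
        if F[i] <= x:
            picked.append(i)
            x -= F[i]
    assert x == 0
    return picked

def rep_avoiding(x, k, F):
    """For F(k) < x < F(k+1): start from all indices 1..k-1 (which sum to
    F(k+1) - 1) and delete the summands of b = F(k+1) - 1 - x."""
    b = F[k + 1] - 1 - x
    dropped = set(greedy(b, F, k - 1))
    return [i for i in range(1, k) if i not in dropped]

F = fib_index(20)
example = rep_avoiding(17, 7, F)  # indices [1, 2, 3, 5, 6]: 1 + 1 + 2 + 5 + 8 = 17, no F(7) = 13
```

The assertion in `greedy` never fires here, because b <= F(k-1) - 2 is always representable with the indices below k.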
m00
Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
I know this isn't in any way a new idea, but just for the fun of it I made a small program that draws the so-called Ulam spiral, but with composite numbers colored as well. Each pixel is colored with a shade of gray depending on how many terms there are in its prime factorization. The shade of gray is 255/n, where n is the number of prime factors. (I.e. it's 255 for primes, 127 for numbers with 2 factors, 85 for numbers with 3 factors, and so on.) In other words, the darker the pixel, the more prime factors it has. Besides the traditional prime (i.e. white) diagonal patterns, I find it interesting that there are some relatively wide diagonal dark areas. These seem to be diagonals where there are many numbers with a significant amount of prime factors. Another interesting detail is that this coloring also emphasizes many horizontal and vertical lines with lots of primes in them (and which are not so evident in an image where only the primes are drawn.)
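(Not my actual program, but a minimal sketch of the scheme: shade 255 // Omega(n), where Omega counts prime factors with multiplicity, so 255, 127, 85 are the integer quotients 255/1, 255/2, 255/3. The spiral here starts with its first step to the right; any orientation shows the same patterns.)

```python
def omega(n):
    """Count prime factors with multiplicity; omega(1) == 0."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def ulam_shades(side):
    """Return a side x side grid (odd side) of shades 255 // Omega(n),
    with 1 at the center and the spiral's first step to the right."""
    assert side % 2 == 1
    total = side * side
    x = y = 0
    pos = {(0, 0): 1}
    n, run, d = 1, 1, 0
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, up, left, down
    while n < total:
        for _ in range(2):          # two runs of each length: 1,1,2,2,3,3,...
            dx, dy = dirs[d]
            for _ in range(run):
                if n == total:
                    break
                x, y, n = x + dx, y + dy, n + 1
                pos[(x, y)] = n
            d = (d + 1) % 4
        run += 1
    half = side // 2
    return [[255 // max(omega(pos[(cx, cy)]), 1)   # treat 1 (Omega = 0) as bright
             for cx in range(-half, half + 1)]
            for cy in range(half, -half - 1, -1)]
```

Dumping the grid as a PGM/PNG then reproduces the image described: white diagonals of primes, darker bands where Omega is large.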
Player (172)
Joined: 12/28/2007
Posts: 235
Location: Japan, Sapporo
Warp wrote:
I know this isn't in any way a new idea, but just for the fun of it I made a small program that draws the so-called ulam spiral, but with composite numbers colored as well. Each pixel is colored with a shade of gray depending on how many terms there are in its prime factorization. The shade of gray is 255/n, where n is the amount of prime factors. (Ie. it's 255 for primes, 127 for numbers with 2 factors, 85 for numbers with 3 factors and so on.) In other words, the darker the pixel, the more prime factors it has. (image) Besides the traditional prime (ie. white) diagonal patterns, I find it interesting that there are some relatively wide diagonal dark areas. These seem to be diagonals where there are many numbers with a significant amount of prime factors. Another interesting detail is that this coloring also emphasizes many horizontal and vertical lines with lots of primes in them (and which are not so evident in an image where only the primes are drawn.)
It's much easier to explain why composite numbers form vertical/horizontal (half) lines. For example, if we put numbers in an Ulam spiral as follows:
3 2 9
4 1 8
5 6 7
then we can find a half line starting from 18, 39, 68,.... This half line can be expressed in n as 4n^2+9n+5, which factorizes as (4n+5)(n+1). Hence every number x on this half line must have at least two factors. Moreover, n+1 is even if n is odd, so x often has at least three factors. In general, the numbers on some half line can be expressed by a quadratic polynomial, and polynomials corresponding to dark lines tend to be decomposable.
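This factorization is easy to verify numerically (a quick check, independent of any particular spiral orientation):

```python
def omega(n):
    """Count prime factors with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

# The half line 18, 39, 68, ... is 4n^2 + 9n + 5 = (4n + 5)(n + 1):
vals = [4 * n * n + 9 * n + 5 for n in range(1, 8)]
factored = [(4 * n + 5) * (n + 1) for n in range(1, 8)]

# Every value on the line is composite; for odd n, n + 1 is even,
# so those values pick up at least a third prime factor.
odd_n_extra = [omega(v) >= 3 for n, v in zip(range(1, 8), vals) if n % 2 == 1]
```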
Retired because of that deletion event. Projects (WIP RIP): VIP3 all-exits "almost capeless yoshiless", VIP2 all-exits, TSRP2 "normal run"
Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
I tested what happens if I only mark the twin primes in the Ulam spiral. The result was rather interesting (if not outright hypnotic.)
Joined: 2/3/2013
Posts: 320
Location: Germany
I don't want to turn this into a "help me with my Maths homework" thread or the like, but I've been stuck on this last task of an Analysis exam for quite some time and can't find a solution. The task is as follows:

Let f(x) := e^(cos x). Choose coefficients a, b, c element R such that:
g(x) := 1/x^2 * abs(f(x) - (a + b*x + c*x^2)) --> 0 for x --> 0
Note: abs() is the absolute value function, --> denotes convergence.

My conclusion is that this task is impossible (and the way it is stated suggests there is a solution): As x --> 0, f(x) --> e (because cos x --> 1). We get (g is > 0 and continuous on (0, infinity), so we can replace f(x) with e):

1/x^2 * abs(e - (a + b*x + c*x^2))

Because constants won't serve us a purpose, choose a := e:

1/x^2 * abs(-b*x - c*x^2) = 1/x^2 * abs(-1 * (b*x + c*x^2)) = (b*x + c*x^2)/x^2 = b/x + c*x/x = b/x + c --> infinity for x --> 0, regardless of b or c.

What am I missing here?

Edit: Imagine there is a lim_x-->0(...) on each line (and "= infinity" on the last) to obtain a more correct way of writing this down...
All syllogisms have three parts, therefore this is not a syllogism.
Player (79)
Joined: 8/5/2007
Posts: 865
RGamma wrote:
I don't want to turn this into a "help me with my Maths homework" thread or the like, but I've been stuck on this last task of an Analysis exam for quite some time and can't find a solution. The task is as follows:

Let f(x) := e^(cos x). Choose coefficients a, b, c element R such that:
g(x) := 1/x^2 * abs(f(x) - (a + b*x + c*x^2)) --> 0 for x --> 0
Note: abs() is the absolute value function, --> denotes convergence.

My conclusion is that this task is impossible (and the way it is stated suggests there is a solution): As x --> 0, f(x) --> e (because cos x --> 1). We get (g is > 0 and continuous on (0, infinity), so we can replace f(x) with e):

1/x^2 * abs(e - (a + b*x + c*x^2))

Because constants won't serve us a purpose, choose a := e:

1/x^2 * abs(-b*x - c*x^2) = 1/x^2 * abs(-1 * (b*x + c*x^2)) = (b*x + c*x^2)/x^2 = b/x + c*x/x = b/x + c --> infinity for x --> 0, regardless of b or c.

What am I missing here?

Edit: Imagine there is a lim_x-->0(...) on each line (and "= infinity" on the last) to obtain a more correct way of writing this down...
I'm going to take a wild stab at this and say a + b*x + c*x^2 corresponds to the first three terms of the Taylor series of e^cos(x). I'll leave it to you to show that the limit approaches zero, especially since I have no clue how rigorous your analysis course is. (But the real reason is that it's been forever since I've done this kind of stuff...)
Player (172)
Joined: 12/28/2007
Posts: 235
Location: Japan, Sapporo
RGamma wrote:
As x --> 0, f(x) --> e (because cos x --> 1). We get (g is > 0 and continuous on (0, infinity), so we can replace f(x) with e): 1/x^2 * abs(e - (a + b*x + c*x^2))
You missed something here. For instance, suppose that f(x):=x and g(x):=(f(x)-0)/x, and consider their limits as x --> 0. You see that you can't replace f(x) by 0 even though f(x) --> 0, since g(x)=1 whenever x =/= 0.
Retired because of that deletion event. Projects (WIP RIP): VIP3 all-exits "almost capeless yoshiless", VIP2 all-exits, TSRP2 "normal run"
Joined: 2/3/2013
Posts: 320
Location: Germany
Bobo the King wrote:
RGamma wrote:
...
I'm going to take a wild stab at this and say a + b*x + c*x^2 corresponds to the first three terms of the Taylor series of e^cos(x). I'll leave it to you to show that the limit approaches zero, especially since I have no clue how rigorous your analysis course is. (But the real reason is that it's been forever since I've done this kind of stuff...)
You were correct. A quick visual check with the first three Taylor coefficients at x_0 = 0 (a := e, b := 0, c := -e/2) confirms it. I have to be honest and say I still don't really see the connection, or how you could see it at once. Tomorrow I'll think more about this... too tired now. Thanks for the hint; I doubt I would have gotten a correct solution at all.
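For anyone wanting a quick numerical confirmation that those coefficients work, a sketch:

```python
import math

# Candidate Taylor coefficients of e^cos(x) at x_0 = 0
a, b, c = math.e, 0.0, -math.e / 2

def g(x):
    return abs(math.exp(math.cos(x)) - (a + b * x + c * x * x)) / (x * x)

# g should shrink toward 0 as x -> 0 (the leading error term is (e/6) x^2)
samples = [g(10.0 ** -k) for k in range(1, 4)]
```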
Mister wrote:
RGamma wrote:
As x --> 0, f(x) --> e (because cos x --> 1). We get (g is > 0 and continuous on (0,infinity) so we can replace f(x) with e): 1/x^2 * abs(e - (a +b*x + c*x^2))
You missed something here. For instance, suppose that f(x):=x and g(x):=(f(x)-0)/x, and consider their limits at x --> 0. You see that you can't replace f(x) by 0 though f(x) --> 0, since g(x)=1 whenever x =/= 0.
Oh, and I somehow mindlessly applied some of those rules without them even being valid here, as my failed attempt has shown...
All syllogisms have three parts, therefore this is not a syllogism.
Player (79)
Joined: 8/5/2007
Posts: 865
RGamma wrote:
Bobo the King wrote:
RGamma wrote:
...
I'm going to take a wild stab at this and say a + b*x + c*x^2 corresponds to the first three terms of the Taylor series of e^cos(x). I'll leave it to you to show that the limit approaches zero, especially since I have no clue how rigorous your analysis course is. (But the real reason is that it's been forever since I've done this kind of stuff...)
You were correct. A quick visual check with first three Taylor coefficients at x_0=0 (a:=e, b:=0, c:=-e/2). I'm honest to say I still don't really see the connection there or how you could see it at once. Tomorrow I'll be thinking more about this... too tired now. Thanks for the hint, I doubt I would have gotten a correct solution at all.
I obtained the same coefficients when I sketched out my answer. As for how I did it so quickly... well, I'm not entirely sure myself. It's just that you do enough problems of this sort and you start to "read" them and see what kinds of points they're making. I think that's true of any subject. I still can't "read" a whole lot of mathematical proofs, nor can I "read" computer code. But Taylor series are incredibly practical, so I picked this one right up.

You have some function f(x). There is nothing to immediately suggest you should take the Taylor series of it, although when you do realize you should do a Taylor expansion, it helps to recognize that e^cos(x) is analytic. (I.e., it is infinitely differentiable everywhere. Functions that only involve exponentiation, sines, and cosines tend to be analytic automatically, and it is often unnecessary to rigorously prove that their Taylor series converge everywhere.)

From there, you provided a function g(x). The 1/x^2 term out front is "bad" because it tends to infinity as x goes to 0. Let's disregard it for a moment and concentrate on making |f(x) - (a + b*x + c*x^2)| go to zero. As you found out in your own attempt, it isn't enough to make the function go to zero at a given point (the 1/x^2 screws everything up), so we really want to make it "as zero as possible". In other words, make it as close to zero as possible in the interval immediately surrounding x = 0. Well, now we essentially have the condition that f(x) ~ a + b*x + c*x^2 for x approximately equal to zero, which is practically the definition of a Taylor series. So we give the Taylor series a shot.

Can we intuit that g(x) goes to zero as required? Yes. The (infinite) Taylor series of f(x) and a + b*x + c*x^2 match up through the first three terms, so we are left with a series that starts at the x^3 term. (In fact, that series has coefficient 0 on the x^3 term, so it really starts with an x^4 term.) That means that when we divide by x^2 (the absolute value operation doesn't change things much), we are effectively left with a series whose first term is no lower than x, meaning there is no constant term. That means g(x) goes to zero as x approaches zero.

Now, rigorously proving all this is another matter, and I'm forced to leave it to you for the reasons I gave above: I don't know how rigorous your course is, and even if I did, I haven't done this stuff recently enough to be of all that much help. If your instructor is evil, they will make you do a proof involving epsilons and deltas (although if memory serves me, even that shouldn't be so bad for a function like this). If you've proved some theorems in class and are allowed to take "shortcuts", something like Taylor's theorem will be of much help. Regardless, you should look at these problems as an exercise in building your intuition, not in memorizing a step-by-step series of instructions. As you can see, it is much more helpful to have a broad view of what is going on and to be able to "guess" a strategy rather than to embark on a direct proof from a completely naive perspective.
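The series bookkeeping behind that x^4 claim can be written out (a sketch, expanding around 0):

```latex
\cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots, \qquad
u := \cos x - 1 = -\frac{x^2}{2} + \frac{x^4}{24} - \cdots
\\
e^{\cos x} = e \cdot e^{u}
           = e\left(1 + u + \frac{u^2}{2} + \cdots\right)
           = e\left(1 - \frac{x^2}{2}
             + \left(\frac{1}{24} + \frac{1}{8}\right)x^4 + \cdots\right)
           = e - \frac{e}{2}\,x^2 + \frac{e}{6}\,x^4 + \cdots
```

So a = e, b = 0, c = -e/2 cancel everything through x^2, the x^3 coefficient is 0, and g(x) = |f(x) - (a + bx + cx^2)| / x^2 behaves like (e/6) x^2, which goes to 0.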
Joined: 2/3/2013
Posts: 320
Location: Germany
Bobo the King wrote:
RGamma wrote:
Bobo the King wrote:
RGamma wrote:
...
I'm going to take a wild stab at this and say a + b*x + c*x^2 corresponds to the first three terms of the Taylor series of e^cos(x). I'll leave it to you to show that the limit approaches zero, especially since I have no clue how rigorous your analysis course is. (But the real reason is that it's been forever since I've done this kind of stuff...)
You were correct. A quick visual check with first three Taylor coefficients at x_0=0 (a:=e, b:=0, c:=-e/2). I'm honest to say I still don't really see the connection there or how you could see it at once. Tomorrow I'll be thinking more about this... too tired now. Thanks for the hint, I doubt I would have gotten a correct solution at all.
I obtained the same coefficients when I sketched out my answer. As for how I did it so quickly... well, I'm not entirely sure myself. It's just that you do enough problems of this sort and you start to "read" them and see what kinds of points they're making. I think that's true of any subject. I still can't "read" a whole lot of mathematical proofs, nor can I "read" computer code. But Taylor series are incredibly practical, so I picked this one right up.

You have some function f(x). There is nothing to immediately suggest you should take the Taylor series of it, although when you do realize you should do a Taylor expansion, it helps to recognize that e^cos(x) is analytic. (I.e., it is infinitely differentiable everywhere. Functions that only involve exponentiation, sines, and cosines tend to be analytic automatically, and it is often unnecessary to rigorously prove that their Taylor series converge everywhere.)

From there, you provided a function g(x). The 1/x^2 term out front is "bad" because it tends to infinity as x goes to 0. Let's disregard it for a moment and concentrate on making |f(x) - (a + b*x + c*x^2)| go to zero. As you found out in your own attempt, it isn't enough to make the function go to zero at a given point (the 1/x^2 screws everything up), so we really want to make it "as zero as possible". In other words, make it as close to zero as possible in the interval immediately surrounding x = 0. Well, now we essentially have the condition that f(x) ~ a + b*x + c*x^2 for x approximately equal to zero, which is practically the definition of a Taylor series. So we give the Taylor series a shot.

Can we intuit that g(x) goes to zero as required? Yes. The (infinite) Taylor series of f(x) and a + b*x + c*x^2 match up through the first three terms, so we are left with a series that starts at the x^3 term. (In fact, that series has coefficient 0 on the x^3 term, so it really starts with an x^4 term.) That means that when we divide by x^2 (the absolute value operation doesn't change things much), we are effectively left with a series whose first term is no lower than x, meaning there is no constant term. That means g(x) goes to zero as x approaches zero.

Now, rigorously proving all this is another matter, and I'm forced to leave it to you for the reasons I gave above: I don't know how rigorous your course is, and even if I did, I haven't done this stuff recently enough to be of all that much help. If your instructor is evil, they will make you do a proof involving epsilons and deltas (although if memory serves me, even that shouldn't be so bad for a function like this). If you've proved some theorems in class and are allowed to take "shortcuts", something like Taylor's theorem will be of much help. Regardless, you should look at these problems as an exercise in building your intuition, not in memorizing a step-by-step series of instructions. As you can see, it is much more helpful to have a broad view of what is going on and to be able to "guess" a strategy rather than to embark on a direct proof from a completely naive perspective.
Now that's a great explanation! You don't find these very often in forums or threads dealing with these rather ordinary tasks. I could not formally prove the convergence in the end (yet) and moved on, because the exam is approaching rapidly. Thanks for the help. P.S.: I won't post my maths exercises again (if they're not particularly ingenious), don't worry.
All syllogisms have three parts, therefore this is not a syllogism.
Player (124)
Joined: 3/23/2012
Posts: 296
Location: In X position=50 y position=20
prove -12 = 12
Jungon wrote:
if I was to have a Tool-Assisted real life ... I'd.. I could abuse death, just to see if it saves time ..
Player (79)
Joined: 8/5/2007
Posts: 865
Marx wrote:
prove -12 = 12
No.
Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
Marx wrote:
prove -12 = 12
Write something that implicitly involves a division by zero, fractional powers of negative numbers, or possibly infinities.
NitroGenesis
He/Him
Editor, Experienced player (551)
Joined: 12/24/2009
Posts: 1873
Marx wrote:
prove -12 = 12
-12 + 12 = 0 0 + 12 = 12 Proofed #hashtag mathwizardking2014
YoungJ1997lol wrote:
Normally i would say Yes, but thennI thought "its not the same hack" so ill stick with meh.
Editor
Joined: 11/3/2013
Posts: 506
Marx wrote:
prove -12 = 12
Sorry, but April Fool's Day is tomorrow. (Quite looking forward to it actually, will be the first for me.)
Player (124)
Joined: 3/23/2012
Posts: 296
Location: In X position=50 y position=20
square both sides (-25)^2=25^2 625=625
Jungon wrote:
if I was to have a Tool-Assisted real life ... I'd.. I could abuse death, just to see if it saves time ..
Banned User, Former player
Joined: 3/10/2004
Posts: 7698
Location: Finland
Marx wrote:
square both sides (-25)^2=25^2
You can't prove an equality by simply "squaring both sides": squaring isn't a reversible operation, so it can turn a false equation into a true one. In effect you multiplied the two sides by different numbers (-25 and 25), and you have to multiply both sides by the same number for the step to mean anything.
HHS
Active player (282)
Joined: 10/8/2006
Posts: 356
Marx wrote:
prove -12 = 12
Let's assume that if I am right, then -12 equals 12. If I am indeed right, it then follows that -12 equals 12, just like I said it would - therefore I am right, and -12 does equal 12.
Editor, Skilled player (1404)
Joined: 3/31/2010
Posts: 2086
Some insightful words by our friend George Orwell on this matter.

[...] He picked up the children's history book and looked at the portrait of Big Brother which formed its frontispiece. The hypnotic eyes gazed into his own. It was as though some huge force were pressing down upon you -- something that penetrated inside your skull, battering against your brain, frightening you out of your beliefs, persuading you, almost, to deny the evidence of your senses. In the end the Party would announce that twelve was equal to minus twelve, and you would have to believe it. It was inevitable that they should make that claim sooner or later: the logic of their position demanded it. Not merely the validity of experience, but the very existence of external reality, was tacitly denied by their philosophy. The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all, how do we know that twelve is not equal to minus twelve? Or that the force of gravity works? Or that the past is unchangeable? If both the past and the external world exist only in the mind, and if the mind itself is controllable what then? [...]