I'm not sure it's such a novel idea anymore... And the medium itself limits the possibilities because the outlines are already given and all you can do is use colors.
Getting the highest ratings with the minimal amount of work (which therefore means that the drawing looks nothing like the original) is amusing. On the other hand, I wonder if there would be value in a TAS that replicates the original image 100% as fast as possible.
Home carbonating devices consist basically of a bottle of compressed CO2 that shoots the gas into the water. The water gets carbonated.
When doing so, especially if the water is cold, ice tends to form for a short period of time (before it quickly melts.)
I think that's the relevant part. Many haters have this misconception that making TASes is easy. Just use an emulator and its savestates and slowdown, and the rest is just grinding over the game until there are no errors.
If these people think that it's so easy, they are welcome to try to get even close to the completion time of TASes like the ones for Megaman, OoT or Pokemon Yellow. On their own, without looking at the painstaking work that others have already done.
So it has become a pissing contest now? Who has more experience and who knows more about the language?
I'm sorry, but if you say things like C++11 just makes the language needlessly complicated, or you are having naming clashes with standard library functions like 'fill' or 'distance', then it doesn't really matter what you do as your job and how long you have been doing it. You don't know what you are talking about.
No, I don't consider C++ to be "a masterpiece of design." What irks me is when people start inventing flaws that it doesn't have (eg. templates being slower than explicit code), or exaggerate problems that are completely inconsequential in 99.99% of cases (such as compiling times being longer in certain situations.)
If you had stated actual problems C++ has compared to other languages, accurately and without exaggeration, then I wouldn't have had any complaint.
For example, there are some things that one just has to know and have experience with if one wants to achieve and surpass the speed of other, higher-level languages. For instance, dynamically allocating a million individual objects is much slower in C++ than it is eg. in C# or Java. That's because C++ is stuck with the standard C library memory allocator, which is horribly slow. (There are ways to make allocation much faster in C++, but they require specialized allocators.) This means that if one wants to write efficient code, one needs to avoid doing excessive amounts of individual memory allocations as much as possible. (In other words, prefer allocating entire arrays of objects as single allocations rather than allocating each individual object separately.)
C++ streams tend to be significantly slower than C I/O functions, which means that if maximum I/O speed is required, one has to resort to using the latter. (The advantage of C++ streams is that they are more type safe and in some situations much easier to use and more flexible.) Of course this doesn't mean that a C++ program necessarily is slower in terms of I/O than a C program; it just means that you need to be aware of this when programming, and you need to learn the C functions. This can be an extra burden when learning the language.
Of course compared to languages like C# and Java, memory handling requires more experience and care. Experienced C++ programmers have developed a natural instinct to avoid such problems in the vast majority of code, but this is something that requires a lot of learning, and therefore beginners will struggle with it for a long time. In higher-level languages one seldom needs to worry at all about memory handling issues. (On the other hand, C++-style memory handling also has its advantages in some situations, such as deterministic destruction of objects and handling objects by value, which brings up the possibility of automatically managing things other than memory.)
Yes, because that's the major issue for a beginner programmer.
I stand corrected. That's most certainly the major issue. Especially for the 100-line programs that a beginner programmer is going to make.
Your term "C pointers" is a misnomer. They are commonly called "raw pointers". And there's nothing that forces you to use smart pointers in C++. You make it sound like there is. (This is highly deceptive for someone who doesn't know the languages yet. They get the impression that in C++ you are forced to use smart pointers.)
Have you done actual measurements that RTTI reduces performance, or is it solely based on assumptions?
(I actually have. Eg. the speed difference between calling a virtual function and a normal function is practically immeasurable. The few clock cycles more that it requires gets swamped by everything else that's done in order to call a function.)
Then you are needlessly restricting yourself.
What differences are those?
Make a guess how many times that has been a problem for me during the 15+ years I have actively programmed in C++.
Guess how many times.
You can thank C for that. However, who exactly finds that confusing?
Which is why namespaces exist.
No, what's stupid is the irrational hatred of namespace prefixes. It makes no sense.
Namespace prefixes not only disambiguate the code, they also make it easier to read. And yes, I can present an argument for that. It's not a spurious claim.
I think you are now pulling things from your behind. Try to guess exactly how many times any of that has been a problem during the 15+ years I have been programming in C++.
Guess how many times.
Guess what you are full of?
The general takeaway here is that the more you design your system to be flexible and readily-extensible (e.g. by using higher-level languages, by using templates or other type-agnostic systems; by using an automatic garbage collector; etc.), the more you pay in performance.
It's a mistake to put all "higher-level" features into the same "reduces performance" category. You should get your facts straight before carelessly listing things that don't belong.
One of the main design principles behind C++ templates is that they don't make the program any less efficient than the explicit alternative. Basically, templates are evaluated at compile time rather than at runtime, which means that the compiler has all it needs to make the program as efficient as if templates hadn't been used (and the code had instead been written explicitly for a certain type.)
On the contrary, templates sometimes produce faster code than an equivalent generic implementation in C. You just have to compare std::sort() to qsort(). (The reason that qsort() is slower than std::sort() is because with the former the compiler cannot perform type-specific optimizations, and there are additional indirections that std::sort() doesn't need.)
The problem is that you still have to take care of everything yourself, although now you're typically subcontracting the details to various libraries rather than doing it directly; and because C++ gives you so many options, it means that you have to learn all the options to have much of a chance of following someone else's code. I learned C++ back before, say, smart pointers were invented (or at least, had found their way to the textbooks and compilers I used), meaning that I have trouble following anyone else's code that uses them, and they'd likely have similar problems following my old code.
In general, I prefer languages that are good at one thing, and appropriate for that sort of problem. As such, I feel C is a better language; it's good at something that doesn't come up very often, but it's usually clearly the right language when it does. Languages that try to be good at everything too often end up doing nothing well…
ais523 wrote:
C# and C++ are likewise noticeably slower than C if you use their standard libraries to their full potential, rather than trying to avoid them as much as possible; convenience tends to come with a performance penalty.
This is very typical anti-C++ FUD spouted by C programmers, and it's completely false, and I would pay zero attention to it.
I especially find that last part quite ironic, given that the typical generic data containers written in C that I have seen are measurably less efficient than the equivalent data container in the standard C++ library (usually because the generic C implementation performs more memory allocations, which are a big bottleneck. The C implementation also often consumes more memory.)
In most cases the claims are made without any kind of actual measurement results, based solely on assumptions. In a few cases some claims might be based on measurements, but it often turns out that the wrong C++ container was being used for the job, or it was used in a suboptimal manner (possibly on purpose.)
I have programmed in C++ for over 15 years, almost 10 years professionally, and I use the standard library containers and other tools all the time. They make programming several orders of magnitude easier and safer, and I can assure you that they are no less efficient than any equivalent in C. On the contrary, in some cases they can be more efficient.
Even if the C++ containers were less efficient, they would still be very useful because you can write one-liners in C++ that require dozens if not hundreds of lines in C. That alone would be reason enough. Luckily, they are not less efficient.
(There is one exception: C++ streams are measurably slower than the C I/O functions. This is a fact. If maximal file handling speed is required in a program, one has to resort to the C-style I/O functions. They are part of the C++ standard as well, so they are available if needed. However, while this exception is a real case where the C standard library beats the C++ one, it tends to be the only case.)
So, while I slept last night, I had a dream. In the dream, I was at a mall (shopping malls have been a recurring setting in my dreams lately for some reason). Above one of the entrances was a long banner. The banner had a closeup of the upper portion of Rarity's stock pose. She was telekinetically holding a book in front of her face. So either it was an ad for a bookstore, or simply a banner advocating the joys of reading. Above her was written the phrase...
"I LIKE TO READ ABOUT BOYS WHO BUCK"
So... yeah. Last night my subconscious used ponies to conjure up a rude pun. I need help.
Further optimizations can be applied: you can initialize all even numbers except 2 as composite, start at 3, and use n+=2 instead of ++n. That can be made even better using something called a factorization wheel, but that's more complicated.
If we are talking about programmatic optimizations, then one of the major problems with the basic algorithm is that it has poor cache locality if the bit array is much larger than even the outermost cache, and the algorithm can be made significantly faster by doing things in a slightly different order. However, that topic is beyond the scope of this thread, which is about math.
The sieve of Eratosthenes is a relatively simple algorithm for finding all the prime numbers smaller than a given value. Basically it works by starting from 2 and marking all of its multiples as composite, then advancing to the next unmarked number and marking all of its multiples as composite, and so on.
A small optimization can be performed in that when advancing to the next unmarked number, you can stop at the square root of the maximum value. The reason for this is relatively easy to understand intuitively: By necessity, every single composite number within the range will have at least one prime factor that's smaller than or equal to the square root of the maximum value. Therefore once you get to this value, all composites will necessarily have been marked, so there's no need to continue from there.
A less intuitive optimization is that when marking the multiples of the current value, you don't have to start from its first multiple. Instead, you can start from its square. The algorithm can be written in code eg. like this:
Language: cpp
// Assumes a bit array such as: std::bitset<kMaxValue> prime; (from <bitset>)
prime.set(); // start by marking every number as potentially prime
prime[0] = false; prime[1] = false; // 0 and 1 are not primes
for(unsigned long n = 2; n*n < kMaxValue; ++n)
    if(prime[n])
        for(unsigned long index = n*n; index < kMaxValue; index += n)
            prime[index] = false;
(Note how the outer loop ends at the square root of kMaxValue, and the inner loop starts at the square of the current prime we are marking all multiples of.)
So my question is: Why can you start marking multiples from the square of the current value?
After learning about countability, whenever you encounter a new infinite set, your newly trained intuition should kick in and ask: "So how would I number these elements without missing any?". If you can't come up with a numbering off the top of your head, it's probably uncountable.
It's just that without experience it can be hard to come up with ways to enumerate an infinite set of values. This is often the case even when there's a trivial (in hindsight) way.
For example, how do you enumerate all positive rational numbers? It's surprisingly difficult to come up with a way. Yet, there's a very simple one: Put all of them in order in an infinite two-dimensional grid, like this:
1/1 1/2 1/3 1/4 1/5 ...
2/1 2/2 2/3 2/4 2/5 ...
3/1 3/2 3/3 3/4 3/5 ...
...
and then simply enumerate them diagonally (ie. 1->1/1, 2->1/2, 3->2/1, 4->1/3, 5->2/2, 6->3/1, and so on.)
(This way of thinking also helps in understanding why any set whose values are each formed from n integers is likewise enumerable.)