Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
@Tub: In a real contest, there's no bonus for faster code, so we usually don't care about optimizing it to death. The bonus comes from the time of your submission: whoever submits an accepted solution earlier has the advantage, and each WA (wrong answer), TLE (time limit exceeded), or other rejected submission adds a 20-minute penalty. Aside from that, the time limit and the maximum input size are usually set so that a sub-optimal implementation of an algorithm with the right complexity passes, while anything of higher complexity is rejected even if it's coded well. This problem was an exception; the limits were unexpectedly generous. It's also interesting to see that two theoretically completely different solutions end up doing the same thing. The LIS approach is very interesting. For the special case d=2, this problem can be solved in O(n log n); you might want to check it out.
@Warp: I prefer it the other way because it gives me a better idea of the time it'll take. For example, when d is a lot larger than n, the dominant part is O(n d log d). It may be notation abuse, but...
If anyone else is interested in problems like that, there are a lot more at UVa. I just passed 104: Arbitrage, also a very interesting problem.
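For anyone curious about that O(n log n) bound, here's a minimal sketch (my own illustration, not tied to this problem's input format) of the standard way to compute the length of a longest increasing subsequence: keep, for each possible length, the smallest tail seen so far, and binary-search where each new element fits.
Language: python

import bisect

def lis_length(sequence):
    # Length of the longest strictly increasing subsequence in O(n log n).
    # tails[k] is the smallest possible last element of an increasing
    # subsequence of length k+1 seen so far; each new element either
    # extends the longest subsequence or tightens one of the tails.
    tails = []
    for x in sequence:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4, e.g. 1, 4, 5, 9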
Banned User
Joined: 6/18/2010
Posts: 183
I was recently asked an interesting programming-related question that boils down to: "Can you write a wrapper for a general function that automatically memoizes recursive function calls?" As a specific example, suppose you have the recursive function for calculating Fibonacci numbers:
Fib(n) { Return Fib(n-1) + Fib(n-2); }
A clever programmer intending on calculating lots of Fibonacci numbers might write a program that stores previously calculated results in a hash table, so that each call to Fib need only be calculated once per n. But suppose Fib is an already-written function whose code you can't change. Can you write a wrapper function that stores and retrieves previous calculations as necessary? It would be trivial, for example, to ensure that if you've calculated Wrapper(Fib(n)), you never need to do so again. But if you try to calculate Wrapper(Fib(n+1)), Fib will make a recursive call to Fib(n), not Wrapper(Fib(n)). Is there a way to address that?
Anyway, when asked this question I said, "I don't know, I suppose it must depend on the programming language. I don't know of a way of doing it, though." Then again, I don't actually know very much about programming, so I'm not the best person to ask these sorts of questions. Regardless, I thought it was pretty interesting and thought you guys might like to hear it.
Joined: 7/2/2007
Posts: 3960
To my knowledge, you can't do that kind of thing without interfering with either the definition of the function you're wrapping, or with how the language invokes functions -- either way, you're looking at a huge hack if you want to do this at runtime. Python has decorators which can do what you want fairly cleanly, but they require modifying the wrapped function to indicate that it is wrapped. They'd look something like this (untested):
# Maps functions to arguments to results of calling functions with those arguments
previousResults = {}

def memoize(function):
    # Establish an entry in our memo map for this function.
    global previousResults
    if function not in previousResults:
        previousResults[function] = {}

    def wrappedFunc(*args):
        # Check if we've tried this before.
        key = tuple(args)
        if key in previousResults[function]:
            return previousResults[function][key]
        value = function(*args)
        previousResults[function][key] = value
        return value

    return wrappedFunc

@memoize
def fib(n):
    if n == 0:
        return 0
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
Thus, you can't at runtime say "Hang on, I want to decorate this function so it's memoized"; however, you can easily memoize any function by making a one-line change to it. EDIT: more generally, you can do this kind of thing in any language that supports functions as first-class objects. Putting "@memoize" before the definition of fib() is just syntactic sugar for saying "def tmp(n): ...; fib = memoize(tmp)".
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Joined: 10/20/2006
Posts: 1248
If you can't modify the function, then you would probably have to find some way to automatically replace it, so that the recursive calls go to your new function, which checks the table of results first and only then calls the original function. It should work in C/C++ at least. I'd also say it depends on the language, and I also don't know a lot about programming.
Joined: 7/2/2007
Posts: 3960
That's what I meant by "a huge hack" for runtime work. In Python you'd probably do that by importing the module the function is in and then assigning to the module's namespace -- move the original function to an unused name and put your memoized version where it used to be. For a compiled language this gets trickier; in C/C++ you'd have to modify the call table, and I don't know how you'd go about doing that. Assuredly it's possible, but you'd need to know exactly what you're doing to avoid trampling over important parts of RAM and causing a segfault or bus error.
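To make that concrete, here's a minimal sketch of the Python route, assuming the untouchable fib() lives in a hypothetical module called slowmath. It works because fib resolves its own name through its module's globals on every call, so rebinding that name also redirects the recursive calls:
Language: python

import slowmath  # hypothetical module containing the original, unmodifiable fib()

def memoize(function):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = function(*args)
        return cache[args]
    return wrapper

# Keep the original around under a new name, then rebind the old name.
slowmath._original_fib = slowmath.fib
slowmath.fib = memoize(slowmath._original_fib)

# Recursive calls inside slowmath now hit the wrapper too, because they
# look up the name "fib" in slowmath's namespace at call time.
print(slowmath.fib(100))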
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Doing this kind of wrapping seems very complicated. I tried to do something like that some time ago, to find out how many times std::sort() in C++ calls itself, but couldn't get anywhere. I imagine it's possible if you were to watch the execution stack and replace the original function with the wrapped one when it gets to the top of the stack, though I have no idea how to do that.
This is a bit off topic, but memoization may not be required to efficiently evaluate Fibonacci numbers (or any other linear recurrence, for that matter). It is still the best option if you want all of them, but if you want only one, you can compute it in logarithmic time using matrix exponentiation:
Language: C++

#include <iostream>
using namespace std;

struct mat{
    unsigned long long m[2][2];
};

mat ident, fibMat;

mat operator* (mat a, mat b){
    mat c;
    c.m[0][0]=a.m[0][0]*b.m[0][0]+a.m[0][1]*b.m[1][0];
    c.m[0][1]=a.m[0][0]*b.m[0][1]+a.m[0][1]*b.m[1][1];
    c.m[1][0]=a.m[1][0]*b.m[0][0]+a.m[1][1]*b.m[1][0];
    c.m[1][1]=a.m[1][0]*b.m[0][1]+a.m[1][1]*b.m[1][1];
    return c;
}

mat pow (mat a, int n){
    if (n==0) return ident;
    if (n==1) return a;
    if (n==2) return a*a;
    return pow(pow(a,n/2),2)*pow(a,n%2);
}

unsigned long long fib (int n){
    mat p;
    if (n==0) return 0;
    p=pow(fibMat,n-1);
    return p.m[0][0];
}

int main(){
    int n;
    ident.m[0][0]=ident.m[1][1]=1;
    ident.m[1][0]=ident.m[0][1]=0;
    fibMat.m[0][0]=fibMat.m[0][1]=fibMat.m[1][0]=1;
    fibMat.m[1][1]=0;
    cin>>n;
    cout<<fib(n);
    return 0;
}
This can answer ridiculously big queries fast, like fib(10^10000), though it will obviously overflow. Another option is to diagonalize the matrix, which is almost the same as using the closed-form formula for the general term, but then you'll have to deal with some precision errors.
Joined: 7/2/2007
Posts: 3960
Of course, there's also the formula which allows you to calculate any Fibonacci number very quickly by abusing the series' relationship with the golden ratio. No recursion needed; just N*3 multiplications and some basic arithmetic (and the square root of 5, but that can be precalculated). I suppose you might start to run into trouble with floating-point precision after a while, though.
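For reference, a tiny sketch of that formula (Binet's formula); the precision caveat is the catch, since a double only gives exact results for roughly the first 70 Fibonacci numbers:
Language: python

import math

def fib_binet(n):
    # F(n) = round(phi^n / sqrt(5)); exact only while the float mantissa
    # can still hold F(n), which stops being true somewhere around n = 70.
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))

print([fib_binet(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]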
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Derakon wrote:
Of course, there's also the formula which allows you to calculate any Fibonacci number very quickly by abusing the series' relationship with the golden ratio. No recursion needed; just N*3 multiplications and some basic arithmetic (and the square root of 5, but that can be precalculated). I suppose you might start to run into trouble with floating-point precision after a while, though.
I think that the point of the task was not to calculate the Fibonacci number as fast as possible, but to take any given recursive function and make a generic solution that would make it more efficient. Fibonacci was just an example. Naturally if you wanted to calculate the Fibonacci number faster, you would just use an iterative loop rather than doing it recursively. (The problem with the naive recursive solution is that it's converting an essentially O(n) solution into an O(n^2) solution by needlessly calculating the same values many times.)
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Actually, the naive recursive solution turns the problem into O(2^n). The similar iterative/dynamic-programming approach does it in O(n), but it can be solved in O(log n), like my code does.
Also, Derakon, you don't need N multiplications to exponentiate a number. For example, a^27 = a^16 * a^8 * a^2 * a, and a^16 can be calculated as (((a^2)^2)^2)^2. The other powers are found along the way at each iteration, so you need only O(log n) operations. Using the closed-form formula is just the same as exponentiating the diagonalized matrix, but since it's no longer integral, there'll be some annoying precision errors.
Enough talking about Fibonacci. I guess I understood the problem wrong; at first, I thought it was about manipulating a function to which the programmer has no access. If it's about general solutions to speed up recursive algorithms, I think it depends on the problem. There are many ways to improve recursions, drawing on linear algebra, dynamic programming, game theory and even number theory, so it's pretty hard to come up with a generic approach.
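Since it came up, here's a quick sketch of that squaring trick for plain numbers (the same idea drives the matrix version above):
Language: python

def power(a, n):
    # Compute a**n with O(log n) multiplications (binary exponentiation).
    # Mirrors a^27 = a^16 * a^8 * a^2 * a: square the running power of a
    # each step, and multiply it in whenever the corresponding bit of n is set.
    result = 1
    while n > 0:
        if n & 1:
            result *= a
        a *= a
        n >>= 1
    return result

print(power(3, 27) == 3 ** 27)  # True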
HHS
Active player (286)
Joined: 10/8/2006
Posts: 356
It's simple to compute Fibonacci and Lucas numbers using only integer math, by taking advantage of the following facts:
L(A+B) = (L(A)L(B) + 5F(A)F(B))/2
F(A+B) = (L(A)F(B) + L(B)F(A))/2
L(2A) = (L(A)^2 + 5F(A)^2)/2
F(2A) = L(A)F(A)
I have an old program which computes the Nth Fibonacci number and the Nth Lucas number at once using the above relations, for N up to 46999, which takes about 60 ms to compute.
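A short sketch of how those identities turn into code (not HHS's program; it leans on Python's arbitrary-precision integers, and the divisions by 2 are always exact because F(k) and L(k) have the same parity):
Language: python

def fib_lucas(n):
    # Return (F(n), L(n)) using the doubling/addition identities above.
    # Walk the bits of n from most significant to least, doubling the index
    # at each step and adding one whenever the bit is set.
    F, L = 0, 2                                    # F(0), L(0)
    for bit in bin(n)[2:]:
        F, L = F * L, (L * L + 5 * F * F) // 2     # k -> 2k
        if bit == '1':
            F, L = (F + L) // 2, (L + 5 * F) // 2  # 2k -> 2k+1, addition with B=1
    return F, L

print(fib_lucas(10))  # (55, 123)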
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
Well, lately I've been trying to solve this programming problem. Yeah, it's in Portuguese, but you can ignore the description. It gives you a set of (possibly non-bipartite) graphs in the following way: first the number of graphs, then for each graph the number of vertices, followed by one line per vertex containing its degree and the vertices it's linked to. You should output "pair programming" if a perfect matching exists in the graph and "extreme programming" if it doesn't.
This is an old problem, from way back when they had the annoying habit of creating difficulty by requiring the use of "obscure" algorithms. (Seeing how often they have to rejudge these kinds of problems, possibly even the examiners didn't know what they were doing; in fact, most people who passed this one used a transformation to a bipartite graph that doesn't always work, but the judge failed to catch it.) Anyway, AFAIK the only way to determine whether a perfect matching exists is to actually compute the maximum matching and see if it covers all nodes. The most common algorithm for this is due to Edmonds; despite being a pain to implement, it's still the easiest one. However, it's O(v^4), and the n<=100 restriction in the problem will only allow O(v^3) or faster to pass. Well, maybe O(v^4) could pass if one were to minimize function calls and instructions and use static variables, but since the algorithm is complicated, the constants hidden by the O notation are probably large.
Knowing this, I searched the internet for faster versions of the algorithm, but always ended up at a page where I'd have to pay to access the paper. The only thing I found was Fortran code implementing the O(sqrt(v)*e) version, which had over 1500 lines of code. That's a bit too much. I know that Edmonds's algorithm can be implemented in O(v^3), and I'd be satisfied if I managed to do that. Still, the problem is that I can't find any clues about this implementation on the web. I'm 99% sure the improvement comes from optimizing blossom contraction and path lifting. In fact, I'm close to doing that by using the original graph all the time, with auxiliary arrays to make the algorithm "think" the blossom was contracted. I might be able to do it if I find a way to deal with blossoms within blossoms, but the method I have in mind may be over-complicated. Anyway, I'm asking if anyone knows where I can find this algorithm; I know I'd have better luck asking on the problem site's forum, but not many people go there.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
That reminds me of the time when I had to make programs for manipulating graphs as my payjob. I learned quite a lot about graphs back then (and have forgotten most of it since). For example, checking if two graphs are isomorphic is an NP problem. (Two graphs are isomorphic if you can rearrange one of them so that it becomes identical to the other. That might at first sound like a trivial problem, but it's, in fact, an extremely hard one.) Curiously, determining if two graphs are strongly bisimilar is a polynomial-time problem, even though the problem sounds very similar to the isomorphism problem. (Two graphs with named arcs are strongly bisimilar if all the named paths in one graph exist in the other, and vice-versa.) Strong bisimilarity happens to be a much more useful comparison of graphs than isomorphism, so it's lucky that it's an easy problem.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I managed to find the O(v^3) algorithm in a combinatorial optimization book I never knew I had, lol. It's much more elegant than the one I had in mind: it manages to build the entire forest, contract blossoms, check distances, and lift the path just by applying labels to the nodes. The length of the code is still big, though... After around 250 lines, I got Accepted, and it seems I smashed the time limit. Their test cases must not have been as bad as I thought; maybe the idea really was to use the O(v^4) version xD. I made a commented version if anyone wants to see it. I don't want to post accepted code, so I changed the way the graph is read and modified it to output the maximum matching. Most of it is still copy-pasted from the other one, so there are still some ugly things like code duplication and global variables.
Warp wrote:
That reminds me of the time when I had to make programs for manipulating graphs as my payjob. I learned quite a lot about graphs back then (and have forgotten most of it since). For example, checking if two graphs are isomorphic is an NP problem. (Two graphs are isomorphic if you can rearrange one of them so that it becomes identical to the other. That might at first sound like a trivial problem, but it's, in fact, an extremely hard one.) Curiously, determining if two graphs are strongly bisimilar is a polynomial-time problem, even though the problem sounds very similar to the isomorphism problem. (Two graphs with named arcs are strongly bisimilar if all the named paths in one graph exist in the other, and vice-versa.) Strong bisimilarity happens to be a much more useful comparison of graphs than isomorphism, so it's lucky that it's an easy problem.
I remember that when I first heard of the Travelling Salesman problem, I thought it would be easy, since Floyd-Warshall can find all-pairs shortest paths quite efficiently; shortly after, I came to know it's NP-hard... I also had trouble understanding why 2-SAT is solved in polynomial time while 3-SAT (which is awfully similar) is NP-complete.
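(For what it's worth, the polynomial-time part becomes less mysterious once you look at the implication graph: each clause (a or b) gives the implications not-a => b and not-b => a, and the formula is unsatisfiable exactly when some variable and its negation can each reach the other. Below is a deliberately naive sketch of that check using plain reachability; the real linear-time algorithm uses strongly connected components instead.)
Language: python

from collections import defaultdict, deque

def two_sat_satisfiable(n_vars, clauses):
    # clauses: list of literal pairs; +i means variable i, -i its negation
    # (variables numbered from 1).  Each clause (a or b) contributes the
    # implications (-a -> b) and (-b -> a).
    graph = defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b)
        graph[-b].append(a)

    def reaches(src, dst):
        # Plain BFS reachability; O(V + E) per call.
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return True
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    # Unsatisfiable iff some x and -x lie on a common cycle.
    return not any(reaches(x, -x) and reaches(-x, x)
                   for x in range(1, n_vars + 1))

print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (1, -2)]))            # True
print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # False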
arflech
He/Him
Joined: 5/3/2008
Posts: 1120
HHS wrote:
It's simple to compute Fibonacci and Lucas numbers using only integer math, by taking advantage of the following facts: L(A+B)=(L(A)L(B)+5F(A)F(B))/2 F(A+B)=(L(A)F(B)+L(B)F(A))/2 L(2A)=(L(A)^2+5F(A)^2)/2 F(2A)=L(A)F(A) I have an old program which computes the Nth Fibonacci number and the Nth Lucas number at once using the above relations, for N up to 46999, which takes about 60ms to compute.
I have been inspired to make a program in Javascript that does this for any number, by first finding its base-2 representation; here's the basic idea: Set up arrays F[] and L[], remembering that F[1]=1 and L[1]=1, and use the special-case logic that F(0)=0, L(0)=2, F(-N)=(-1)^(N+1)*F(N), and L(-N)=(-1)^N*L(N). Set up array D[] as the set of powers of 2 to use, something like this...
for(i=0,j=N;j>0;i++){
 D[i]=j%2;
 j=(j-D[i])/2;
}
Then populate F[] and L[] with values for the powers of 2 up to 2^(D.length-1) and use the sum formulas to calculate F[N] and L[N]. I'll link to it when I'm done, but in the meantime check out a demonstration of the Huntington-Hill algorithm for U.S. Congressional apportionment: http://jansal.net/HuntingtonHill.shtml
i imgur com/QiCaaH8 png
Joined: 7/2/2007
Posts: 3960
This is a bit for fun and a bit for serious. Jetblade, the open-source game I've been working on off and on, has a spacefilling automaton that I use to carve tunnels into the game world. Here's an example animation showing how the algorithm works:
Basically, a selection of "seeds" is laid down along the various lines that describe where the game wants to place tunnels. These seeds have two attributes: an owner and a lifespan. Each tick, they attempt to replace all neighboring cells with a copy of themselves with a shorter life, and make an open space. If there's already a seed adjacent to them with a different owner, however, then they create a wall instead.
There's a problem, though: this is slow. For a (fairly small) map of 250x250 cells, seed expansion takes 7 seconds, and this scales approximately linearly with the number of cells (35 seconds at 500x500). So here's the function; do you see any notable improvements that could be made?
Language: python

## Run a spacefilling automaton to divide a grid up into different sectors.
# \param seeds A mapping of locations to Seed instances
# \param blocks A grid of cells to be converted to walls and open space
# Each seed ordinarily replaces all neighbor spaces with copies of itself
# that have 1 less turn to live. When a seed dies (has 0 life), its location
# is marked as open in the blocks grid. Dead seeds are retained so we can
# map spaces to the sectors that own those spaces.
# When two expanding seeds collide, they merge if they are for the same
# sector (as determined by their 'owner' property), or form a wall between
# them if they are not.
def expandSeeds(self, seeds, blocks):
    deadSeeds = dict()
    numCols = len(blocks)
    numRows = len(blocks[0])
    logger.debug("Expanding seeds for a", numCols, "by", numRows, "grid")
    while len(seeds):
        newSeeds = dict()
        for loc, curSeed in seeds.iteritems():
            # If the counter expires while the seed is not in open space,
            # replace it with a wall.
            if (not self.getIsInBounds(loc, numCols, numRows) or
                    (curSeed.life <= 0 and blocks[loc.ix][loc.iy] != BLOCK_EMPTY)):
                deadSeeds[loc] = curSeed
                blocks[loc.ix][loc.iy] = BLOCK_WALL
                continue
            if blocks[loc.ix][loc.iy] == BLOCK_WALL:
                # A wall sprung up under us. Check if there's a dead
                # seed there; if so, try to merge with it.
                if loc in deadSeeds:
                    newSeed = self.tryMergeDeadSeed(curSeed, loc, seeds,
                                                    deadSeeds, numCols, numRows)
                    if newSeed is not None:
                        newSeeds[loc] = newSeed
                        del deadSeeds[loc]
                        blocks[loc.ix][loc.iy] = BLOCK_UNALLOCATED
                else:
                    # No seeds on walls.
                    continue
            if blocks[loc.ix][loc.iy] == BLOCK_EMPTY:
                # No seeds on empty space
                deadSeeds[loc] = curSeed
                continue
            for adjLoc in loc.perimeter():
                # Check adjacent blocks for seeds. If none, expand into the
                # space. If there is a seed, make a wall, unless it's our
                # own seed in which case we merge with it.
                if not self.getIsInBounds(adjLoc, numCols, numRows):
                    continue
                if adjLoc in seeds or adjLoc in newSeeds:
                    # Two adjacent seeds; either merge or make wall.
                    altSeed = None
                    if adjLoc in seeds:
                        altSeed = seeds[adjLoc]
                    else:
                        altSeed = newSeeds[adjLoc]
                    if altSeed.owner == curSeed.owner:
                        # Merge the seeds
                        newSeeds[adjLoc] = seed.Seed(curSeed.owner,
                                max(curSeed.life, altSeed.life) - 1,
                                max(curSeed.age, altSeed.age) + 1)
                    else:
                        # Conflict; make a wall.
                        altSeed.life = 0
                        altSeed.age = constants.BIGNUM
                        curSeed.life = 0
                        curSeed.age = constants.BIGNUM
                        blocks[loc.ix][loc.iy] = BLOCK_WALL
                        blocks[adjLoc.ix][adjLoc.iy] = BLOCK_WALL
                elif blocks[loc.ix][loc.iy] == BLOCK_UNALLOCATED:
                    # No seed there; plant one.
                    newSeeds[adjLoc] = seed.Seed(curSeed.owner,
                            curSeed.life - 1, curSeed.age + 1)
                elif (blocks[adjLoc.ix][adjLoc.iy] != BLOCK_EMPTY and
                        deadSeeds.has_key(adjLoc)):
                    # Hit a wall containing a dead seed; try to merge
                    # with it.
                    newSeed = self.tryMergeDeadSeed(curSeed, adjLoc, seeds,
                                                    deadSeeds, numCols, numRows)
                    if newSeed is not None:
                        newSeeds[adjLoc] = newSeed
                        del deadSeeds[adjLoc]
                        blocks[adjLoc.ix][adjLoc.iy] = BLOCK_UNALLOCATED
            # Done expanding; zero our location if we didn't wall it
            # earlier.
            if blocks[loc.ix][loc.iy] != BLOCK_WALL:
                blocks[loc.ix][loc.iy] = BLOCK_EMPTY
                deadSeeds[loc] = curSeed
        seeds = newSeeds
    return (blocks, deadSeeds)
Pyrel - an open-source rewrite of the Angband roguelike game in Python.
Player (42)
Joined: 12/27/2008
Posts: 873
Location: Germany
I don't know Python and couldn't understand much of what you coded, but I couldn't spot any indication that you used a queue as an auxiliary structure. Using a queue can make your code more efficient, because the algorithm seems to be a variation of a breadth-first search. If you actually used a queue there, then ignore this post. :P
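For reference, the queue-based pattern looks roughly like this (a generic breadth-first flood fill, not your seed/wall logic): each cell enters the queue at most once, so the whole expansion is linear in the number of cells.
Language: python

from collections import deque

def flood_fill_owners(grid, seeds):
    # grid:  2D list of booleans, True = cell may be claimed
    # seeds: dict mapping (x, y) -> owner id
    # Returns a dict mapping every claimed cell to the owner that reached it first.
    owners = dict(seeds)
    queue = deque(seeds)                  # each starting location enqueued once
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] and (nx, ny) not in owners):
                owners[(nx, ny)] = owners[(x, y)]   # claim the cell exactly once
                queue.append((nx, ny))
    return owners

# Tiny demo: two seeds expanding over a 4x4 open grid.
grid = [[True] * 4 for _ in range(4)]
print(flood_fill_owners(grid, {(0, 0): 'A', (3, 3): 'B'}))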
Player (36)
Joined: 9/11/2004
Posts: 2630
Offtopically, I wrote a few "useful" utility functions for C++ in an effort to make it more pythonic. (I'm aware that this is probably the C++ equivalent of "I made a poopy mommy~!" But I thought someone might find it useful.) The first one is zip. It returns a container of tuples; the only limitation is that the container type of the passed-in objects must be the same. For clarification, if you run:
Language: cpp

std::deque<std::string> a;
std::deque<int> b;
std::deque<std::multimap> c;
std::deque<std::pair<int, std::string>> d;

for (auto i : zip(a, b, c, d)) {
}
You'll be iterating over a std::tuple<std::string, int, std::multimap, std::pair<int, std::string>> with the elements paired up in the manner you'd expect. Unlike boost's zip, this does not require that the lists be the same size; it functions like Python's zip in that the result is automatically truncated to the shortest length.
Language: cpp

template <class ContainerA>
unsigned commonLength(unsigned len, const ContainerA &first) {
    unsigned firstLen = first.size();
    if (len > firstLen) {
        len = firstLen;
    }
    return len;
}

template <class ContainerA, class... Containers>
unsigned commonLength(unsigned len, const ContainerA &first, const Containers&... rest) {
    unsigned firstLen = first.size();
    if (len > firstLen) {
        len = firstLen;
    }
    return commonLength(len, rest...);
}

template <template <typename...> class Container, typename TypeA>
std::tuple<TypeA> getTupleFrom(unsigned index, Container<TypeA> const& first) {
    return std::tuple<TypeA>(first[index]);
}

template <template <typename...> class Container, typename TypeA, typename... Types>
std::tuple<TypeA, Types...> getTupleFrom(unsigned index, Container<TypeA> const& first,
                                         Container<Types> const&... rest) {
    return std::tuple_cat(std::tuple<TypeA>(first[index]),
                          getTupleFrom<Container, Types...>(index, rest...));
}

template <template <typename...> class Container, typename... Types>
Container<std::tuple<Types...>> zip(Container<Types> const&... args) {
    unsigned len = commonLength(std::numeric_limits<unsigned>::max(), args...);
    Container<std::tuple<Types...>> res;
    std::tuple<Types...> item;
    for (unsigned i = 0; i < len; i++) {
        item = getTupleFrom<Container, Types...>(i, args...);
        res.push_back(item);
    }
    return res;
}
Next is a wrapper, to make reverse walking of a container through a range based for easy/possible.
Language: cpp

for (auto i : reverseIterate(mylist)) { }
Language: cpp

template <typename Iterable>
struct ReverseWrapper {
private:
    Iterable& m_iterable;

public:
    ReverseWrapper(Iterable& iterable) : m_iterable(iterable) {}

    auto begin() const -> decltype(m_iterable.rbegin()) {
        return m_iterable.rbegin();
    }

    auto end() const -> decltype(m_iterable.rend()) {
        return m_iterable.rend();
    }
};

template <typename Iterable>
ReverseWrapper<Iterable> reverseIterate(Iterable& list) {
    return ReverseWrapper<Iterable>(list);
}
Finally, range. It works like python's range statement in that it returns a list or other container for the given range. I haven't bothered to make a functor-based version (like an xrange generator) yet, because my zip function doesn't work with that concept yet. If I ever need it I'll write it and put it here.
Language: cpp

template <template <typename...> class Container, typename Numeric>
Container<Numeric> range(Numeric min, Numeric max, Numeric diff) {
    Container<Numeric> res;
    while ((diff > 0 && max > min) || (diff < 0 && min > max)) {
        res.push_back(min);
        min += diff;
    }
    return res;
}

template <template <typename...> class Container, typename Numeric>
Container<Numeric> range(Numeric max) {
    Container<Numeric> res;
    int min = 0;
    while (max > min) {
        res.push_back(min);
        min += 1;
    }
    return res;
}

template <template <typename...> class Container, typename Numeric>
Container<Numeric> range(Numeric min, Numeric max) {
    Container<Numeric> res;
    while (max > min) {
        res.push_back(min);
        min += 1;
    }
    return res;
}
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Emulator Coder
Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
OmnipotentEntity: Nice bit of code you got there. Just one question: what does it have to do with the topic at hand, namely how C++ exceptions work on Linux, and how to make said code portable? Edit: Thread moved.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
Player (36)
Joined: 9/11/2004
Posts: 2630
Well, I said it was off topic. I didn't think it was worth a new thread, and your last post also seemed off topic (portability vs. a problem with exception handling), so I assumed this was the new "hey look, neat C++ stuff" thread.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Emulator Coder
Joined: 3/9/2004
Posts: 4588
Location: In his lab studying psychology to find new ways to torture TASers and forumers
OmnipotentEntity wrote:
and your last post also seemed off topic (portability vs problem with exception handling)
My last post links to an article which explains in great detail how to properly fix the problem I was having.
Warning: Opinions expressed by Nach or others in this post do not necessarily reflect the views, opinions, or position of Nach himself on the matter(s) being discussed therein.
Player (36)
Joined: 9/11/2004
Posts: 2630
Ah, I did not pick up on that from the text of the linked post at all.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
OmnipotentEntity wrote:
The first one is zip. It returns a container of tuples, the only limitation is that the container of the passed in objects should be the same.
If I'm reading the code correctly, zip() is copying all the containers into a tuple. This can be quite a heavy operation and often something one doesn't want to do (especially if one's just iterating over some containers, something that usually doesn't involve copying anything).
Next is a wrapper, to make reverse walking of a container through a range based for easy/possible.
If I'm reading the code correctly, it doesn't work for static arrays (even though a range-based for loop normally does).
Player (36)
Joined: 9/11/2004
Posts: 2630
Warp wrote:
OmnipotentEntity wrote:
The first one is zip. It returns a container of tuples, the only limitation is that the container of the passed in objects should be the same.
If I'm reading the code correctly, zip() is copying all the containers into a tuple. This can be quite a heavy operation and often something one doesn't want to do (especially if one's just iterating over some containers, something that usually doesn't involve copying anything).
You're reading it correctly; it functions the same way that Python's zip function does. If it becomes a performance problem within my application, I'll replace it with a functor-based generator.
Warp wrote:
Next is a wrapper, to make reverse walking of a container through a range based for easy/possible.
If I'm reading the code correctly, it doesn't work for static arrays (even though a range-based for loop normally does).
Yes, you're reading it correctly, it works on anything that can be iterated C++ style. A static array no, a std::array yes.
Build a man a fire, warm him for a day, Set a man on fire, warm him for the rest of his life.
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
OmnipotentEntity wrote:
Yes, you're reading it correctly, it works on anything that can be iterated C++ style. A static array no, a std::array yes.
My point is that I think the code can be expanded to support traversing static arrays in reverse too, using the same syntax (to make it more in line with the regular functionality of a range-based for).
Joined: 3/30/2007
Posts: 44
http://crackmes.us/read.py?id=574
The idea is to make a valid key for the desired username. Warning: not for reversing beginners. Clue: mudlord = 24E4781C-82AF5466-91229927-F89F083F