I'm new to functional programming and I'm trying to rewrite some code to make it more functional in order to grasp the concepts. I've just discovered the Array.prototype.reduce() method and used it to create an object of arrays of combinations (I used a for loop before that). However, I'm not sure about something. Look at this code:

const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        if(accum[comb.strength]) {
            accum[comb.strength].push(comb);
        } else {
            accum[comb.strength] = [comb];
        }

        return accum;
    },
    {}
);

Obviously, this callback mutates its argument accum, so it is not considered pure. On the other hand, reduce, if I understand it correctly, discards the accumulator from every iteration and doesn't use it after calling the callback function. Still, it's not a pure function. I can rewrite it like this:

const sortedCombinations = combinations.reduce(
    (accum, comb) => {
        const tempAccum = Object.assign({}, accum);
        if(tempAccum[comb.strength]) {
            tempAccum[comb.strength].push(comb);
        } else {
            tempAccum[comb.strength] = [comb];
        }

        return tempAccum;
    },
    {}
);

Now, in my understanding, this function is considered pure. However, it creates a new object on every iteration, which costs both time and, obviously, memory.

So the question is: which variant is better and why? Is purity really so important that I should sacrifice performance and memory to achieve it? Or maybe I'm missing something, and there is some better option?

1 Answer

TL;DR: It isn't, if you own the accumulator.


It's quite common in JavaScript to use the spread operator to create nice-looking one-liner reducing functions. Developers often claim that doing so also makes their functions pure in the process.

const foo = xs => xs.reduce((acc, x) => ({...acc, [x.a]: x}), {});
//------------------------------------------------------------^
//                                                   (initial acc value)

But let's think about it for a second... What could possibly go wrong if you mutated acc? e.g.,

const foo = xs => xs.reduce((acc, x) => {
  acc[x.a] = x;
  return acc;
}, {});

Absolutely nothing.

The initial value of acc is an empty object literal created on the fly; it never exists outside the reducer, so mutating it cannot be observed from the outside. Using the spread operator is only a "cosmetic" choice at this point. Both functions are pure.
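To make that concrete, here is a minimal sketch (the input data is made up for illustration): both versions of foo return the same result, and neither of them modifies anything that exists outside the call.

const xs = [{ a: 'k1' }, { a: 'k2' }];

const fooSpread = ys => ys.reduce((acc, x) => ({ ...acc, [x.a]: x }), {});
const fooMutate = ys => ys.reduce((acc, x) => { acc[x.a] = x; return acc; }, {});

console.log(JSON.stringify(fooSpread(xs)) === JSON.stringify(fooMutate(xs))); // true
console.log(xs); // [ { a: 'k1' }, { a: 'k2' } ] — the input array is untouched by both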

Immutability is a trait, not a process per se, meaning that cloning data to achieve immutability is most likely both a naive and an inefficient approach to it. Most people forget that the spread operator only does a shallow clone anyway!
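For example, here is a small sketch with hypothetical data: spreading an object copies only its top-level keys, so any nested array is still shared with the original, and pushing into it mutates both.

const original = { strong: [1, 2] };
const copy = { ...original };

copy.strong.push(3); // mutates the array that original.strong also references

console.log(original.strong);                 // [ 1, 2, 3 ]
console.log(copy.strong === original.strong); // true — same array reference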

I wrote an article a little while ago in which I claim that mutation and functional programming don't have to be mutually exclusive, and I also show that using the spread operator isn't a trivial choice to make.
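As a rough illustration of the cost (an illustrative sketch, not a rigorous benchmark): spreading the accumulator copies every key gathered so far on each iteration, so the total work grows roughly quadratically with the input size, whereas mutating the same accumulator stays linear.

const items = Array.from({ length: 10000 }, (_, i) => ({ a: 'k' + i }));

console.time('spread accumulator');
items.reduce((acc, x) => ({ ...acc, [x.a]: x }), {});
console.timeEnd('spread accumulator');

console.time('mutate accumulator');
items.reduce((acc, x) => { acc[x.a] = x; return acc; }, {});
console.timeEnd('mutate accumulator');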

