Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Time is the New Memory
September 21, 2009 9:00 AM
|
In this Artima interview from the JVM Languages Summit, Rich Hickey, the creator of the Clojure programming language, discusses his perspective on mutable state and what programming languages should do about it.
Read this Artima.com interview:

http://www.artima.com/articles/hickey_on_time.html

What is your opinion of Hickey's notion that the problems people usually associate with mutable state are problems of time? What do you think of the idea that you should write as much of your program as possible with immutable objects (or data) and pure functions? |
Posts: 98 / Nickname: achilleas / Registered: February 3, 2005 2:57 AM
Re: Time is the New Memory
September 22, 2009 2:53 AM
|
> In this Artima interview from the JVM Languages Summit,
> Rich Hickey, the creator of the Clojure programming language, discusses his perspective on mutable state and what programming languages should do about it.
>
> Read this Artima.com interview:
>
> http://www.artima.com/articles/hickey_on_time.html
>
> What is your opinion of Hickey's notion that the problems people usually associate with mutable state are problems of time? What do you think of the idea that you should write as much of your program as possible with immutable objects (or data) and pure functions?

I think the Pure Functional Programming people are living in a fantasy land. They speak about Pure FP as if it makes it impossible to introduce bugs, which, of course, is not the case at all. They blame the concept of "state" for everything, which is absurd, to say the least.

Their programs are full of monadic type declarations, since almost everything requires state, and when they are asked to produce a quicksort implementation that is as fast as in-place quicksort, or when they are asked to solve the problem of a tree whose parent points to its children and whose children point back to the parent, they mumble something about some very strange patterns (they call it the zipper pattern, or something like that) that require you to interleave your code (whatever that is: a calculation, event management in a GUI app, packet reception from the network) with the tree manipulation structure, resulting in a big spaghetti mess.

The advantages of functional programming include the strong and static type system and functions as first-class entities. Pureness is a straitjacket and a severe disadvantage. Here is an example of the madness (quote taken from the article): "A large system that's a graph of mutable objects is just very hard to understand." And in Pure FP, a large computation that's a graph of function calls is also very hard to understand. The wrong values appear at function parameters instead of object members. There is no difference in reality between objects that are inconsistent internally and functions with wrong parameters. In Pure FP, the problem of wrong values is transferred from objects to functions.

The real issue behind programming is not time or pureness or immutability. First of all, there is no such thing as an impure function, unless the function reads some value from the hardware. Each and every procedure/function/subroutine/calculation in a program will give the exact same result if given the exact same parameters. And by parameters, I mean not only the values passed to it, but implicit parameters like global variables. External variables to a function are input variables to the function as well. Given a function F and an input I, where I consists of P (the parameters passed to the function) and G (the variables accessible in the context of F), the output is always O:

I = {P, G}
F(I) = O

The real issue in programming is partial functions. Most, if not all, programming languages allow for partial functions, i.e. for functions where not all the possible values of the input are mapped to a result. For example (to take an example from the article), a Date object is inconsistent because it violates the mathematical function that defines a date. When we have month = February and then we set day = 31, we have violated the definition of Date. There is no February 31. It's not a problem of time, it's a problem of partiality: at that specific point of the computation, we have violated the definition of Date.

The problem of partial functions extends to everything. For example, the function 'fclose' is defined as a function that takes as input an open file, not a closed one. But there is no check from the compiler on whether we pass a closed file or not. Another example is functions that do not accept null pointers. The input set of such a function does not include the null value, but the compiler does not complain. FP languages somewhat address the problem by using Maybe types, but the solution does not cover the whole range of problems. For example, it's not possible to say that the input value of a function can only take values within the range X..Y.

One solution to the problem of partial functions is to make values as types. A range type from X to Y, for example, can only statically accept a value that is within the range; at run time, if we have a value that may be out of range, we promote the value to X..Y if it is inside the range, and we declare an alternative action for when it is outside the range. For example (in C++ parlance, though the same idea works in any statically typed language):
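Here is a rough sketch of the idea in Java syntax (the class and method names are made up, and the 1..31 bound is just for illustration): the out-of-range case has to be handled once, at the boundary.

public final class DayOfMonth {
    private final int value;                    // invariant: 1 <= value <= 31

    private DayOfMonth(int value) { this.value = value; }

    // Run-time "promotion" of a plain int into the range type.
    public static DayOfMonth promote(int candidate) {
        if (candidate < 1 || candidate > 31) {
            // the declared alternative action for out-of-range input
            throw new IllegalArgumentException("no day numbered " + candidate);
        }
        return new DayOfMonth(candidate);
    }

    public int asInt() { return value; }        // callers never need to re-check the bounds
}

Any code that accepts a DayOfMonth rather than a plain int can never receive an out-of-range day.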
Incidentally, using values as types allows for a lot of optimizations; for example, in the above, bounds checking is redundant.

I apologize for the long reply, but some things need to be said, in my opinion. It ticks me off when I see that people can't recognize the real nature of the problem in programming :-). |
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Re: Time is the New Memory
September 22, 2009 0:18 PM
|
> > What do you think of the idea that you should
> > write as much of your program as possible with immutable objects (or data) and pure functions?
>
> I think the Pure Functional Programming people are living in a fantasy land. They speak about Pure FP as if it makes it impossible to introduce bugs, which, of course, is not the case at all. They blame the concept of "state" for everything, which is absurd, to say the least.

I think there's a difference between the idea of "writing much of your program with immutable objects (or data) and pure functions" and "pure functional programming." What I was asking about in my question was the former, which is what I think of as a functional style, or attitude. It includes preferring pure functions over functions with side effects. Also, Rich Hickey didn't come across as a functional programming purist in his keynote or interview.

One bit I left out of the article was about pure functional programming. Hickey said that the problem with pure functional programming is that most programs that we write aren't pure functions. Some are, but most aren't. One example he gave of a kind of program that's close to a pure function is a compiler. It takes input, processes it, and writes output. But if you have a web app that allows users to log in, then you've got state to deal with, and that's not such a natural fit for a pure functional approach. So what Hickey was suggesting, I think, is more of a hybrid notion in which you try to use a functional style for most of your program, but for the stateful parts you make it stateful. I try to do that in Scala as well, and it's the main way my programming style changed as a result of learning Scala, which I described here:

http://www.artima.com/scalazine/articles/programming_style.html

The main difference I see between what I do nowadays in Scala and what Rich Hickey was claiming is important in his keynote is the degree to which the language enforces this approach. Scala makes it easier and more natural to use immutable objects than Java does, but it doesn't make it any harder to make mutable ones. Clojure, by contrast, if you just stick with Clojure and don't drop down into Java, pretty much makes it impossible to mutate state (if my understanding is correct) except via the "time constructs," as he calls them, which Clojure provides. Because Clojure is running on the JVM, you could always do mutable stuff as much as you want from Clojure by calling into Java.

I see Clojure's not letting you mutate state in unmanaged ways (except by dropping down into Java) as analogous to Java's not giving you any way to free memory (except by dropping down into C via JNI). Scala's approach to mutable state, by contrast, is more analogous to C++'s approach to memory, in that you can call delete, but if you want you can use a garbage collection library. Scala's approach to this is to let people use libraries and compiler plugins. |
Posts: 1 / Nickname: 54924 / Registered: April 7, 2008 2:50 PM
Re: Time is the New Memory
September 22, 2009 1:58 PM
|
I see Clojure as a bit of a bait-and-switch; you come in thinking "I can interoperate with Java to my heart's content!" but very quickly you don't want to, and when you do have to do some interop, you stuff it into a side namespace to encapsulate the inherent Java ugliness. For example, I had a namespace with functions for XML parsing, and I separated those out so that the result of XML parsing was a stream of tokens that were fully Clojure, and all the awkward Java class proxying and callback state stuff stayed inside the namespace.
That being said, people write Swing GUIs in Clojure (and quickly build up a library of functions and macros to make that clean). Clojure lets you create Java objects and invoke methods on them as much as necessary, which is of course impure and unsafe. You can call any Java method, and even use the set! special form to update public static or instance fields. But I have yet to need to do that in my own code.

I actually like the lack of options in Clojure vs. Scala's more flexible (and therefore more complex) approach. I'm a better coder for adopting functional practices over the last few years, but that has accelerated as I've been using Clojure for the last 6 months or so. Occasionally you see something awkward in Clojure that's simple in Java ... until you realize that there is an idiomatic way to accomplish what you want in Clojure that simply is different from Java ... which makes sense for a different language. Total and predictable immutability combined with lazy evaluation is a powerful combination. |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 3:47 AM
|
> I apologize for the long reply, but some things need to be
> said, in my opinion. It ticks me off when I see that people can't recognize the real nature of the problem in programming :-).

I'm not sure using examples in C++ to illustrate one's argument is particularly persuasive. ;-)

Besides that, there are a lot of unsubstantiated assertions there, and a lot of misunderstanding of how things work in many FP languages. Some thoughts, in no particular order:

- I don't think anyone has any problem with 'state'. Rather, the issue is statefulness with undefined or poorly-defined semantics. Monads and STM and CAS and so on are all ways of defining the semantics of how state changes.

- Some flavor(s) of Haskell emit C, and are as fast as C for many tasks. Users of other languages are happy to trade some performance for better semantics and development processes. Different strokes, and all.

- For tree manipulations, including maintaining references to parents from child nodes in languages with persistent data structures, see Huet's zippers and their descendants (pun intended! :-P)

- Not all FP languages have strong+static type systems (e.g. clojure doesn't).

- "A graph of mutable objects is hard to understand" because of the undefined or poorly-defined semantics of those objects' state. Programs consisting of functions and values are likely just as complicated (in some formal sense of the number of relationships between entities, etc.), but more understandable because of the well-defined semantics.

Finally, I'm baffled that you appear to be a proponent of very aggressive strong+static typing, given your discussion of "values as types", but are simultaneously unimpressed with the FP languages that take those ideas the furthest (Haskell and the ML family, to my understanding).

Cheers,

- Chas |
Posts: 98 / Nickname: achilleas / Registered: February 3, 2005 2:57 AM
Re: Time is the New Memory
September 22, 2009 6:10 AM
|
> I'm not sure using examples in C++ to illustrate one's
> argument is particularly persuasive. ;-)

The number of people who will understand a C++ example is greater than the number of people who will understand Haskell, for example. It's also easier for me.

> Besides that, there are a lot of unsubstantiated assertions there, and a lot of misunderstanding of how things work in many FP languages.
>
> Some thoughts, in no particular order:
>
> - I don't think anyone has any problem with 'state'. Rather, the issue is statefulness with undefined or poorly-defined semantics. Monads and STM and CAS and so on are all ways of defining the semantics of how state changes.

I suppose you do not read LtU frequently, then :-).

> - Some flavor(s) of Haskell emit C, and are as fast as C for many tasks.

Undoubtedly. But my point is that C (and its derivatives) cannot be abandoned, because there are things that pure FP languages cannot do (like in-place quicksort).

> Users of other languages are happy to trade some performance for better semantics and development processes. Different strokes, and all.

And that leads to unresponsive, slow and bloated applications.

> - For tree manipulations, including maintaining references to parents from child nodes in languages with persistent data structures, see Huet's zippers and their descendants (pun intended! :-P)

Yes, that's what I was talking about: the zipper structure. Thanks, but no thanks; I'd like my code to be simple.

> - Not all FP languages have strong+static type systems (e.g. clojure doesn't).

I never said that all do, but the main focus these days is on Haskell.

> - "A graph of mutable objects is hard to understand" because of the undefined or poorly-defined semantics of those objects' state. Programs consisting of functions and values are likely just as complicated (in some formal sense of the number of relationships between entities, etc.), but more understandable because of the well-defined semantics.

The same well-defined semantics can be applied to objects.

> Finally, I'm baffled that you appear to be a proponent of very aggressive strong+static typing, given your discussion of "values as types", but are simultaneously unimpressed with the FP languages that take those ideas the furthest (Haskell and the ML family, to my understanding).

That's because strong+static typing is good; lack of updates is not. In my opinion, that is. |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 6:47 AM
|
> >
> > - Some flavor(s) of Haskell emit C, and are as fast as C for many tasks.
>
> Undoubtedly. But my point is that C (and its derivatives) cannot be abandoned, because there are things that pure FP languages cannot do (like in-place quicksort).

We're definitely talking past each other because of our (likely very different) use cases and specialties.

No one said that they should be abandoned, but the same tactics that are relied upon in systems programming simply do not scale to solving different classes of problems. The fact is that many environments are happy enough to get the performance benefits of lower-level languages when necessary (e.g. [almost] no one writes sort routines in clojure; you just "shell out" to Java, and wrap the result in an immutable wrapper in O(1)) while working with higher-level primitives the rest of the time.

I promise you that the bottlenecks of most programs have nothing to do with the speed of sorting routines.

> > Users of other languages are happy to trade some performance for better semantics and development processes. Different strokes, and all.
>
> And that leads to unresponsive, slow and bloated applications.

Per the above, in many fields, that leads to applications that *work*. If I had to contemplate building my current projects in C++, Java, et al., I simply would not have bothered -- the amount of manual bookkeeping would have drowned us for years.

> > - "A graph of mutable objects is hard to understand" because of the undefined or poorly-defined semantics of those objects' state. Programs consisting of functions and values are likely just as complicated (in some formal sense of the number of relationships between entities, etc.), but more understandable because of the well-defined semantics.
>
> The same well-defined semantics can be applied to objects.

But where? If there's an approach that provides similar sorts of guarantees around mutable objects, I'd be happy to educate myself.

- Chas |
Posts: 98 / Nickname: achilleas / Registered: February 3, 2005 2:57 AM
Re: Time is the New Memory
September 23, 2009 2:01 AM
|
> > >
> > > - Some flavor(s) of Haskell emit C, and are as fast as C for many tasks.
> >
> > Undoubtedly. But my point is that C (and its derivatives) cannot be abandoned, because there are things that pure FP languages cannot do (like in-place quicksort).
>
> We're definitely talking past each other because of our (likely very different) use cases and specialties.
>
> No one said that they should be abandoned, but the same tactics that are relied upon in systems programming simply do not scale to solving different classes of problems. The fact is that many environments are happy enough to get the performance benefits of lower-level languages when necessary (e.g. [almost] no one writes sort routines in clojure; you just "shell out" to Java, and wrap the result in an immutable wrapper in O(1)) while working with higher-level primitives the rest of the time.

You are talking as if systems programming languages cannot be improved at all. I disagree. You can have a systems programming language that is as high-level as possible. You can have your cake and eat it, too.

> I promise you that the bottlenecks of most programs have nothing to do with the speed of sorting routines.

You forget the accumulating costs of using high-level features.

> > > Users of other languages are happy to trade some performance for better semantics and development processes. Different strokes, and all.
> >
> > And that leads to unresponsive, slow and bloated applications.
>
> Per the above, in many fields, that leads to applications that *work*. If I had to contemplate building my current projects in C++, Java, et al., I simply would not have bothered -- the amount of manual bookkeeping would have drowned us for years.

Please give us an example of such a project.

> > > - "A graph of mutable objects is hard to understand" because of the undefined or poorly-defined semantics of those objects' state. Programs consisting of functions and values are likely just as complicated (in some formal sense of the number of relationships between entities, etc.), but more understandable because of the well-defined semantics.
> >
> > The same well-defined semantics can be applied to objects.
>
> But where? If there's an approach that provides similar sorts of guarantees around mutable objects, I'd be happy to educate myself.

The same sort of guarantees that exist for FP also exist for objects - it's the type system, not FP. Apply the type system to objects and, hey presto, you can have similar guarantees. In other words, the extremely static and strong type system of modern FP languages can be applied to other languages as well. The separation between FP and OO is artificial and invalid, because the two concepts (functions as first-class entities, and objects) are complementary, not overlapping. |
Posts: 26 / Nickname: cpurdy / Registered: December 23, 2004 0:16 AM
Re: Time is the New Memory
September 22, 2009 1:52 PM
|
> For example (to take an example from the article), A Date
> object is inconsistent because of violation of the mathematical function that defines the date. When we have month = February and then we set the day = 31, we have violated the definition of Date. There is no date February 31. It's not a problem of time, it's a problem of partiality: at that specific point of computation, we have violated the definition of Date.

It seems inconceivable to me that we could build any substantial arguments or analogies on top of an analysis of java.util.Date, other than to simply conclude that it is poorly designed for its stated purpose.

To wit: back in the days of the USSR, a worker goes to see the doctor about a pain. The doctor asks him, "What is the problem?" The worker shows him a spot on his arm and says, "Doctor, it hurts when I press here." The doctor says, "Well, don't press there."

(Note: It's a Russian joke, so that's the whole thing; there is no punch line, but now you are supposed to laugh.)

So what about java.util.Date? Doctor, it hurts ..

Peace,

Cameron Purdy | Oracle Coherence |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 4:23 PM
|
> It seems inconceivable to me that we could build any
> substantial arguments or analogies on top of an analysis of java.util.Date, other than to simply conclude that it is poorly designed for its stated purpose.

The specific implementation or API is irrelevant -- the point is that any object that does not have value semantics (that is, every "pojo" or similar object that is just a bag of mutable values with undefined semantics) suffers from the same issues as j.u.Date. Make your own class with three int members, and the same points apply.

- Chas |
Posts: 98 / Nickname: achilleas / Registered: February 3, 2005 2:57 AM
Re: Time is the New Memory
September 23, 2009 2:10 AM
|
> > It seems inconceivable to me that we could build any
> > substantial arguments or analogies on top of an analysis of java.util.Date, other than to simply conclude that it is poorly designed for its stated purpose.
>
> The specific implementation or API is irrelevant -- the point is that any object that does not have value semantics (that is, every "pojo" or similar object that is just a bag of mutable values with undefined semantics) suffers from the same issues as j.u.Date. Make your own class with three int members, and the same points apply.
>
> - Chas

But the problem is not mutability. For example, make one class with 3 int bitfields that span exactly 32 bits; now there is no problem mutating the class, because the bitfields will be written/read in one go.

Here is your problematic class:

public class Foo {
    private int member1;
    private int member2;
    private int member3;

    void set(int m1, int m2, int m3) {
        member1 = m1;
        member2 = m2;
        member3 = m3;
    }
}

The above class has a consistency problem when called from different threads. But what about this class?

public class Bar {
    private int member;

    void set(int m1, int m2, int m3) {
        member = m1 | m2 | m3;
    }
}

The above does not have a problem, because a 32-bit word will be set with one instruction, and so an invariant violation cannot be achieved.

So the problem clearly is not mutability. |
Posts: 20 / Nickname: raoulduke / Registered: April 14, 2006 11:48 AM
Re: Time is the New Memory
October 2, 2009 2:07 PM
|
> The above does not have a problem, because a 32-bit word
> will be set with one instruction, and so an invariant violation cannot be achieved.
> So the problem clearly is not mutability.

augh. i really respect Achilleas but... :-) we all need to do a better job understanding what the issues are with concurrency! otherwise we are just saying stuff that is plain wrong.

the problems with concurrency are multi-levelled. at the low level it is "internal" race conditions, which is what Achilleas was pointing out. but at the higher level it is "external" race conditions, because one often wants to test-and-set, and if the test and the set are not in the same "transaction" then you get that "external" race that can make things fubar.

sincerely. |
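To make the "external" race concrete, here is a minimal Java sketch (the class and method are hypothetical): each individual write is atomic, yet the test and the set are still two separate steps.

class Flags {
    private volatile int bits = 0;        // writes to a single int are atomic

    // Not safe: two threads can both see the mask as unset and both "claim" it,
    // because another thread can run between the test and the set.
    void claimIfFree(int mask) {
        if ((bits & mask) == 0) {         // test ...
            bits = bits | mask;           // ... then set: not one transaction
        }
    }
}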
Posts: 3 / Nickname: i30817 / Registered: June 26, 2007 0:24 PM
Re: Time is the New Memory
September 23, 2009 9:41 AM
|
> One solution to the problem of partial functions is to
> make values as types. A range type from X to Y, for example, can only statically accept a value that is within the range; at run time, if we have a value that may be out of range, we promote the value to X..Y if it is inside the range, and we declare an alternative action for when it is outside the range.
> For example (in C++ parlance):

I think X10 is trying to solve this... I am convinced X10 is the real Java++ after reading the specification (not Scala). |
Posts: 3 / Nickname: i30817 / Registered: June 26, 2007 0:24 PM
Re: Time is the New Memory
September 23, 2009 9:53 AM
|
Posts: 20 / Nickname: raoulduke / Registered: April 14, 2006 11:48 AM
Re: Time is the New Memory
October 2, 2009 2:15 PM
|
> > One solution to the problem of partial functions is to
> > make values as types. A range type from X to Y, for ...
>
> I think X10 is trying to solve this... I am convinced X10 is the real Java++ after reading the specification (not Scala).

pretty please, anybody who is interested in this, go read up on dependent types (http://www.google.com/search?q=site%3Alambda-the-ultimate.org+dependent+types). Achilleas knows of that stuff, i believe. and then go read (which i cannot find right now, bloody heck) Xavier Leroy's claim that it just makes your code suck, and you should just use an external prover instead. of course he's all about Coq, so maybe it should be taken with a grain of salt. |
Posts: 3 / Nickname: rkitts / Registered: January 27, 2003 4:30 PM
Re: Time is the New Memory
September 25, 2009 9:12 AM
|
This sounds promising to me.
It seems to me that all advancements with respect to languages have somehow been rooted in removing choice. GC, a now trite example I think, said to developers, "Managing your own memory is hard. No, really, it's hard. Trust me. So I am not going to let you do it." C said, "Dealing with registers (say) is hard. No, really, it's hard. Trust me. So I'm not going to let you do it." (Yes, I know about the old (useless) register keyword. Please, this is a forest discussion, not a trees discussion.)

This perhaps explains why I don't find any new language that I've seen so far in the wild particularly compelling. For example, and this is not a troll, Scala seems to me to be the C++ of Java. It certainly allows me to do some things more easily than I can in Java (easily might simply mean succinctly), but it doesn't force me into something that makes my life easier. Indeed, it seems quite the opposite.

I'm not sure what's next. I'm not sure where Hickey's perspective might eventually lead, if anywhere, but I'm seriously attracted to the that's-hard-so-I-disallow-it proposition it suggests.

Finally, I have to say I'm not convinced there are many, if any, mechanical changes that can be made to software development that will make it better (a multi-dimensional statement to be sure). The truly hard problems I have (had) to deal with almost never have to do with my individual ability to express myself. Rather, they are around others understanding my expression in an unambiguous fashion. But that's probably more a lack of imagination on my part than a truism.

---Rick |
Posts: 20 / Nickname: raoulduke / Registered: April 14, 2006 11:48 AM
Re: Time is the New Memory
October 2, 2009 2:21 PM
|
> It seems to me that all advancements with respect to
> languages have somehow been rooted in removing choice. GC, a now trite example I think, said to developers, "Managing your own memory is hard. No, really, it's hard. Trust me. So I am not going to let you do it."

some people have said that gc is like stm (http://lambda-the-ultimate.org/node/2990). there are probably some high-level parallels to be drawn.

memory and resource management are more complicated than just leaving it up to the GC. people forget that, and we end up with really crappy code. and on the whole i really like GC, don't get me wrong. |
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 21, 2009 9:36 PM
|
And yet, despite it all, the vast majority of useful software is built using the old imperative/mutable warhorses.
Plato may be rolling over in his grave (and good, the totalitarian bugger), but Aristotle is laughing.

Cheers,
Carson |
Posts: 3 / Nickname: rkitts / Registered: January 27, 2003 4:30 PM
Re: Time is the New Memory
September 25, 2009 9:17 AM
|
> And yet, despite it all, the vast majority of useful
> software is built using the old imperative/mutable warhorses.

Sure, but this is sort of a horrible argument, isn't it? I could just as easily have said, with respect to the car, "And yet, despite it all, the vast majority of transport is provided by the good ol' horse and wagon." Horses totally, totally worked. The problem was all the shit that came with the solution.

---Rick |
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 28, 2009 9:29 PM
|
> > And yet, despite it all, the vast majority of useful
> > software is built using the old imperative/mutable warhorses.
>
> Sure, but this is sort of a horrible argument, isn't it? I could just as easily have said, with respect to the car, "And yet, despite it all, the vast majority of transport is provided by the good ol' horse and wagon." Horses totally, totally worked. The problem was all the shit that came with the solution.

I'm just making the observation that, as far as side-effect-free functional programming goes, people have been advocating it for decades now and, empirically, imperative programming languages have been where most of the application work gets done.

Maybe the right tipping point hasn't been reached. Maybe there will be a cascade to that style of programming when everyone realizes that they get parallelization "for free." Maybe that's a total illusion, since the vast majority of enterprise apps spend a lot of time down in the needly bits of domain models, with small, sequential problems that don't lend themselves to parallelization anyway. I don't know.

Take it back to the title: "Time is the new memory." When memory got solved, I barely even noticed: I kept making the same mistakes I always had with memory management, but I got to make them everywhere! When I see a parallel solution of similar cost to the end user, *then* I'll be impressed. Changing our entire programming model to solve a problem most of us don't have? No thanks.

OTOH, you know my take on modern cars... :)

Cheers,
Carson |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 29, 2009 2:58 AM
|
> Take it back to the title: "Time is the new memory." When
> memory got solved, I barely even noticed: I kept making the same mistakes I always had with memory management, but I got to make them everywhere! When I see a parallel solution of similar cost to the end user, *then* I'll be impressed. Changing our entire programming model to solve a problem most of us don't have?

You'll have that problem *today*. Your code will be going along, check that the state of some object is Bar, carry on, and then fail or do the wrong thing because the state of the "same object" is Baz 14 lines later, as a side effect of one of the 146 methods your code called in the interim that you didn't know about or forgot about when you wrote the code originally.

Just sayin'. :-)

- Chas |
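A contrived Java sketch of that scenario (the classes are hypothetical): nothing concurrent is going on, the check simply goes stale because a call in between mutated the object through the same reference.

class Order {
    String status = "OPEN";
}

class ShippingAudit {
    void record(Order o) {
        // ...lots of useful work, plus one side effect nobody remembers:
        o.status = "ON_HOLD";
    }
}

class Shipping {
    void shipIfOpen(Order order, ShippingAudit audit) {
        if (order.status.equals("OPEN")) {
            audit.record(order);              // looks read-only from here
            // order.status is now "ON_HOLD"; the check above no longer holds
            System.out.println("shipping an order whose status is " + order.status);
        }
    }
}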
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 29, 2009 8:55 AM
|
> You'll have that problem *today*. Your code will be going
> along, check that the state of some object is Bar, carry on, and then fail or do the wrong thing because the state of the "same object" is Baz 14 lines later, as a side effect of one of the 146 methods your code called in the interim that you didn't know about or forgot about when you wrote the code originally.
>
> Just sayin'. :-)

Sorry, man. It just doesn't come up that much in my day-to-day programming. Since I'm down in the gronky bits of domain models most of the time anyway, there just isn't a lot of parallelism to be extracted. There are certainly places where I might adopt a very defensive programming model with respect to threading, but, for the most part, the web server threading model and optimistic concurrency at the database layer do me just fine. No fuss, no muss, no Haskell.

Cheers,
Carson |
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 29, 2009 8:59 AM
|
Ugh. I hate that I can't edit posts:
Second sentence should start "And since" rather than "Since" (I'm making a secondary point since it doesn't directly respond to... Oh never mind.) Cheers, Carson |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 29, 2009 10:35 AM
|
> > You'll have that problem *today*. Your code will be
> > going along, check that the state of some object is Bar, carry on, and then fail or do the wrong thing because the state of the "same object" is Baz 14 lines later, as a side effect of one of the 146 methods your code called in the interim that you didn't know about or forgot about when you wrote the code originally.
> >
> > Just sayin'. :-)
>
> Sorry, man. It just doesn't come up that much in my day-to-day programming. Since I'm down in the gronky bits of domain models most of the time anyway, there just isn't a lot of parallelism to be extracted.

The point of my last message was that parallelism is a similar but separate issue from the scenario I described, which can and does happen in single-threaded code all the time.

- Chas |
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 30, 2009 9:47 AM
|
> The point of my last message was that parallelism is a
> similar but separate issue from the scenario I described, which can and does happen in single-threaded code all the time.

Huh. I guess I see your point. But when that comes up, I typically find it pretty easy to set a breakpoint on the thing in question and see why it got updated. The only time I'll occasionally find myself pulling my hair out is when there is a threading issue, which is when I'll switch over to a more defensive programming model to isolate the, er, isolation.

Same-thread updates aren't something I find hurting me very often, and they seem pretty easy to track down.

Cheers,
Carson |
Posts: 3 / Nickname: marsilya / Registered: June 9, 2008 8:05 PM
Re: Time is the New Memory
September 21, 2009 11:38 PM
|
> If there was a way to glance at a date, at one point in time, there would be no problem
Of course there is, and it is as simple as that: make a monitor for the date. In Java, it is the key synchronization concept. |
Posts: 7 / Nickname: skrisz / Registered: March 16, 2009 2:31 AM
Re: Time is the New Memory
September 22, 2009 0:17 AM
|
> Of course there is and it is as simple as that: Make a
> monitor for date. In java it is the key synchronization concept.

Yes, but the question still remains: "How do you know you got it right?" ;-)

Peace,
K |
Posts: 3 / Nickname: marsilya / Registered: June 9, 2008 8:05 PM
Re: Time is the New Memory
September 22, 2009 0:30 AM
|
> Yes but the question still remains: "How do you know you got it right?" ;-)
if you make every public method in class date synchronized, and if you allow access to the attributes only via methods, then I strongly assume that there cannot occur any concurrency problems. |
Posts: 2 / Nickname: roblally / Registered: December 16, 2007 8:24 AM
Re: Time is the New Memory
September 22, 2009 2:53 AM
|
> if you make every public method in class date synchronized + if you allow access to attribute only via methods, then I strongly assume that there cannot occur any concurrency problems.

If you remove any potential for concurrency, then you certainly cut down on potential concurrency problems. The point is that the objective isn't to eliminate concurrency; it is to enable concurrency whilst maintaining consistency and correctness.

Your suggestion of synchronising every method on date won't make this code safe:

if (date.getDay().equals(3)) date.setDay(4);

... because the synchronisation of the date object doesn't ensure that another thread can't change it between the get and the set method. This is one of the problems that Clojure is trying to solve. |
Posts: 3 / Nickname: marsilya / Registered: June 9, 2008 8:05 PM
Re: Time is the New Memory
September 22, 2009 4:35 AM
|
> won't make this code safe:
> if (date.getDay().equals(3)) date.setDay(4);
> ... because the synchronisation of the date object doesn't ensure that another thread can't change it between the get and the set method. This is one of the problems that Clojure is trying to solve.

It is obvious how to prevent this problem. Just expose the following methods:

synchronized void setNewDate(Date d);
synchronized void setNewDate(int year, int month, int day);

That's it. No magic needed. |
Posts: 2 / Nickname: roblally / Registered: December 16, 2007 8:24 AM
Re: Time is the New Memory
September 22, 2009 5:13 AM
|
> It is obvious how to prevent this problem. Just expose following methods:
> synchronized void setNewDate(Date d);
> synchronized void setNewDate(int year, int month, int day);
>
> That's it. No magic needed.

That doesn't solve anything, since you still need to expose a getDate method and, although you can synchronise that method too, it doesn't provide the atomic check-and-set behaviour that's needed in this case.

You are right, though: no magic is needed. Relational databases solved this problem decades ago by using transactions. Which is exactly what Clojure does... |
Posts: 1 / Nickname: richhickey / Registered: December 15, 2007 2:54 AM
Re: Time is the New Memory
September 22, 2009 5:22 AM
|
> > won't make this code safe:
> > if (date.getDay().equals(3)) date.setDay(4);
> > ... because the synchronisation of the date object doesn't ensure that another thread can't change it between the get and the set method. This is one of the problems that Clojure is trying to solve.
>
> It is obvious how to prevent this problem. Just expose the following methods:
>
> synchronized void setNewDate(Date d);
> synchronized void setNewDate(int year, int month, int day);
>
> That's it. No magic needed.

Imagine a class with mutable members A, B, C, and D. Would you have:

synchronized void setA(Object a);
synchronized void setB(Object b);
synchronized void setC(Object c);
synchronized void setD(Object d);
synchronized void setAB(Object a, Object b);
synchronized void setAC(Object a, Object c);
synchronized void setAD(Object a, Object d);
synchronized void setBC(Object b, Object c);
...

? Providing a synchronized method for every composite operation simply doesn't scale.

And

adate.setNewDate(anotherDate);

makes as much sense as:

42.setNewValue(43);

were that possible.

No one is advocating any 'magic'. Just programming with values, and functions that create new values. By comparison, it is mutable objects that are magical.

Rich |
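As a rough Java illustration of "programming with values" (a hypothetical class, validation elided): "changing" the day produces a new value, so there is nothing to lock and nothing whose meaning can shift underneath you.

final class DateValue {
    final int year;
    final int month;
    final int day;

    DateValue(int year, int month, int day) {
        this.year = year;
        this.month = month;
        this.day = day;
    }

    // A pure function of the old value: callers holding the old DateValue
    // never observe any change; they simply get a new value back.
    DateValue withDay(int newDay) {
        return new DateValue(year, month, newDay);
    }
}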
Posts: 26 / Nickname: cpurdy / Registered: December 23, 2004 0:16 AM
Re: Time is the New Memory
September 22, 2009 1:30 PM
|
> if you make every public method in class date synchronized
> + if you allow access to attribute only via methods, then I strongly assume that there cannot occur any concurrency problems.

Nope. Concurrency challenges represent a larger set than the types of data corruption and visibility issues that synchronization addresses.

Peace,

Cameron Purdy | Oracle Coherence |
Posts: 1 / Nickname: vinigodoy / Registered: September 22, 2009 1:55 AM
Re: Time is the New Memory
September 22, 2009 7:05 AM
|
> if you make every public method in class date synchronized
> + if you allow access to attribute only via methods, then I strongly assume that there cannot occur any concurrency problems.

But that's not true. You'll end up with mutually exclusive methods, but there's no guarantee of synchronization between two calls. For example, if the Date class has all of its methods synchronized, it's still possible for a thread to read the date in between lines 1 and 2:

1. date.setMonth(february);
2. date.setDay(10);

And that's a concurrency problem, even with 1 and 2 being synchronized. You should place the monitor in the class that manipulates the entire date object. And every time this manipulation occurs, it should use this same monitor. That's very hard to achieve if you are sharing this object between lots of classes.

For this situation, the better approach would be making the class immutable (as suggested) or creating a setTime() method that forces the programmer to change all or nothing. That solves the problem of treating the class in pieces, which is also described in the article.

Despite this fact, the "non-concurrency" sample in the article is still a concurrency problem. Since there's resource sharing among different pieces of the code, it's still concurrency, no matter whether the memory is volatile or physical. Of course, the definition of concurrency is intrinsically related to time. |
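A rough Java sketch of the "one monitor in the manipulating class" idea (the MutableDate class and its getDay/setDay methods are hypothetical, mirroring the pseudo-code used earlier in the thread): the compound check-then-set becomes a single synchronized operation, and it works only as long as every manipulation of the date goes through this class.

class MutableDate {
    private int day;
    int getDay() { return day; }
    void setDay(int day) { this.day = day; }
}

class Scheduler {
    private final MutableDate date = new MutableDate();

    // The read and the write happen under one monitor, so no other thread
    // (also going through Scheduler) can interleave between them.
    synchronized void bumpDayIfThird() {
        if (date.getDay() == 3) {
            date.setDay(4);
        }
    }
}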
Posts: 2 / Nickname: dserodio / Registered: April 25, 2006 4:06 AM
Re: Time is the New Memory
September 22, 2009 7:17 PM
|
Pardon my ignorance, but wouldn't this work?
synchronized (date) {
    if (date.getDay().equals(3)) date.setDay(4);
} |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 7:56 PM
|
> Pardon my ignorance, but wouldn't this work?
> synchronized(date) { ... }

Yes, but in a very circumscribed context. That doesn't work for that other chunk of code that already has a reference to the date object and has now just had its "value" swapped out from underneath it. It doesn't work if there's any other code that mutates the object without locking on it.

Manual locking, even in the simplest cases, is asking for trouble.

- Chas |
Posts: 7 / Nickname: skrisz / Registered: March 16, 2009 2:31 AM
Re: Time is the New Memory
September 23, 2009 3:14 AM
|
> That doesn't
> work for that other chunk of code that already has a reference to the date object and has now just had its "value" swapped out from underneath it.

Yes, but this is what software design and planning are about. Clearly the "other chunk of code" was not designed to support this scenario. |
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 22, 2009 10:19 AM
|
Until I see an incremental proposal that doesn't force everyone to change their day-to-day coding style, I'm not convinced. It needs to be something even simpler than synchronized, for example.
Garbage collection succeeded because memory management was hard and, when GC came along, we got to keep programming in the same old way that made sense to us. It was almost wholly subtractive: we just had to do less stuff. Lazy, stupid developers got more productive, since they weren't managing memory well anyway.

Academics and language guys are usually shooting for some platonic ideal and, in the meantime, practical, unsexy tools (web servers splitting requests across cores, java.util.concurrent.* when you have a specific use case) will continue to be where progress is made. Just look at the historical evidence. And 20 years from now, someone advocating functional programming will be giving a talk that starts "Are We There Yet?"

Cheers,
Carson |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 10:39 AM
|
> Academics and language guys are usually shooting for some
> platonic ideal and, in the meantime, practical, unsexy tools (web servers splitting requests across cores, java.util.concurrent.* when you have a specific use case) will continue to be where progress is made. Just look at the historical evidence.

Don't look now, but all of clojure's concurrency primitives are built on top of java.util.concurrent -- which is an excellent library, but doesn't really provide an API that "lazy, stupid developers" can use.

Besides that, if those folks can use JDBC (or whatever) with its transaction isolation levels and such, then surely they can use CAS (atoms) and STM, which are dead-brains simple in comparison.

- Chas |
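For the CAS case, a bare-bones sketch of the atom idea on top of java.util.concurrent (the Box and Fn names are made up; Clojure's real atoms are richer): state is only ever replaced by applying a function to the current value, and the swap retries if another thread got there first.

import java.util.concurrent.atomic.AtomicReference;

// A pure function from the old value to the new value.
interface Fn<T> {
    T apply(T oldValue);
}

final class Box<T> {
    private final AtomicReference<T> state;

    Box(T initial) {
        state = new AtomicReference<T>(initial);
    }

    T deref() {
        return state.get();
    }

    // Compare-and-swap loop: only install the new value if nobody else
    // changed the state since we read it; otherwise recompute and retry.
    T swap(Fn<T> f) {
        while (true) {
            T current = state.get();
            T next = f.apply(current);
            if (state.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}

For example, calling swap with a function that adds one to an Integer bumps a shared counter atomically, no matter how many threads are doing the same thing at once.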
Posts: 18 / Nickname: cgross / Registered: October 16, 2006 3:21 AM
Re: Time is the New Memory
September 22, 2009 7:27 PM
|
> Don't look now, but all of clojure's concurrency
> primitives are built on top of java.util.concurrent -- which is an excellent library, but doesn't really provide an API that "lazy, stupid developers" can use.
>
> Besides that, if those folks can use JDBC (or whatever) with its transaction isolation levels and such, then surely they can use CAS (atoms) and STM, which are dead-brains simple in comparison.
>
> - Chas

I don't disagree. But JDBC doesn't ask you to change your day-to-day coding style *except* when you are working with the database. That's the great thing about it: whatever side computations you do are in the plain-old imperative style you know and love, and then you end up wrapping it all up and tying a transactional bow on top when you are done.

The current state of affairs is this: most of the time I can be stupid. When something concurrent comes up, I have to be smart for a bit to get something out of java.util.concurrent.* set up, and then I can forget about it.

The proposed solution is that I change the way I code everywhere to make the concurrent case easier to deal with for the library, language and hardware developers. But I'm already doing pretty well over here in Imperative-stan, with the web server and database doing a lot of the concurrency heavy lifting for me. I don't want or need to change anything to get my web app running pretty well. Or at least well enough. So why should I adopt an awkward programming methodology across the board when I'm doing pretty well, day to day, without it?

Compare with the proposed analogy, the advent of GC: pre-GC I had to be pretty smart all the time. Then GC came along and, to a reasonable approximation, I could be pretty stupid all the time. (Or apply my admittedly limited brain power to problems other than memory management.) I wrote the same code I always did, but just left off the memory management bits.

That's dramatically different than proposing I change my entire programming style. GC was an unqualified, sand-pounding, sing-it-from-the-hilltops win with a negative cost: they *paid* me not to write code! Glorious.

Until I see a concurrency proposal that is that big of a win with very little cost, I'll remain skeptical of wide adoption. STM might be it, but it's going to have to be dead simple (from the user's perspective) STM that melts into the background for most developers. And I remain skeptical that this is, finally, the moment that functional programming has been waiting for.

As always, worse is better,
Carson |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 22, 2009 8:14 PM
|
> That's dramatically different than proposing I change my
> entire programming style. GC was an unqualified, sand-pounding, sing-it-from-the-hilltops win with a negative cost: they *paid* me not to write code! Glorious.
>
> Until I see a concurrency proposal that is that big of a win with very little cost, I'll remain skeptical of wide adoption. STM might be it, but it's going to have to be dead simple (from the user's perspective) STM that melts into the background for most developers. And I remain skeptical that this is, finally, the moment that functional programming has been waiting for.
>
> As always, worse is better,
> Carson

Yeah, I can see that perspective. I'd respond with:

- Maybe there's a genius moment to be had that will make this entire discussion moot, but it isn't here, and I'm not aware of even a wisp of promise on the horizon that it's coming. There's work to be done now, and I'm not hacking and slashing through Java for another 10 years given a (very pleasant, IMO) alternative.

- This isn't about concurrency exclusively -- note my comment above about manual locking being a broken strategy even in a single-threaded program (due to simple shared references to mutable objects). We don't do much w.r.t. concurrency at all, but having an efficient, pleasant FP environment on the JVM that comes along with great libraries and a great community (talking about clojure here, just in case that wasn't clear) has made me a convert w.r.t. FP, persistent data structures, etc.

- I'm in the happy circumstance of not caring a whit about what's widely adopted or not. I just want a development environment that allows us to do our work in the most efficient, effective way possible; some critical mass within a community is necessary, but wide adoption isn't, for my purposes. It's probably wishing for more engagement than I suspect is realistic, but I wish more people would take the same tack -- if they did, I'll bet we'd converge on a set of ideal solutions way faster than otherwise.

- Chas |
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Re: Time is the New Memory
September 22, 2009 10:14 PM
|
Hi Carson,
> The current state of affairs is this: most of the time I can be stupid. When something concurrent comes up, I have to be smart for a bit to get something out of java.util.concurrent.* set up, and then I can forget about it.
>
> The proposed solution is that I change the way I code everywhere to make the concurrent case easier to deal with for the library, language and hardware developers.

I think the proposal (by which I mean Rich's idea of language support for auto-time management) is intended to make the case easier to deal with for you, not for them--primarily the concurrent case, but also the sequential one.

> But I'm already doing pretty well over here in Imperative-stan, with the web server and database doing a lot of the concurrency heavy lifting for me. I don't want or need to change anything to get my web app running pretty well. Or at least well enough. So why should I adopt an awkward programming methodology across the board when I'm doing pretty well, day to day, without it?

If you're not feeling concurrency pain and aren't overwhelmed by accidental complexity, then you indeed have no need to change anything. But it doesn't sound like you're really doing much concurrent programming, where you're trying to do concurrent things directly. If the library is doing it for you, then that's a good thing. A big design goal for J2EE was to take care of threads so business programmers could focus on writing *sequential* business logic. But not all apps are like that.

I am just polishing off the 1.0 release of ScalaTest, and it is a concurrent app. I use one Scala actor, a bunch of stuff from java.util.concurrent, one class that's synchronized, and one that uses wait/notify (and is synchronized too, of course). The rest is sequential: a mix of imperative and functional style, but leaning primarily towards the functional style, because I felt it made my code better.

I'm also not sure functional programming is as much awkward as it is unfamiliar to the mainstream. I.e., it is awkward to people because they are unfamiliar with it. I find it awkward sometimes, especially when I would need to use recursion to be full-on functional. I do use recursion sometimes, but since I'm working in Scala, I can also just use a while loop and a var, and I do that from time to time. I make sure it is all local, so each thread gets its own copy; then it's perfectly fine to do it the way it's done in Imperative-stan.

If you forget the word "functional" and just say final variables, immutable objects, and methods that return a value based on just the passed data without having any side effects, that really doesn't sound that unfamiliar or awkward. String is an immutable object. Is java.lang.String awkward for Java programmers to use? How about String.valueOf(5)? That's a pure function. It just operates on its input, returns a value, and has no side effects. I doubt that's awkward for Java programmers, and neither are final variables. What probably is awkward, however, is using recursion instead of looping. Recursion isn't that hard to understand, and I figure most Java programmers know what it means and how to do it, but it's not the way we usually have done things in Imperative-stan, so we're not used to it.

> Compare with the proposed analogy, the advent of GC: pre-GC I had to be pretty smart all the time. Then GC came along and, to a reasonable approximation, I could be pretty stupid all the time. (Or apply my admittedly limited brain power to problems other than memory management.) I wrote the same code I always did, but just left off the memory management bits.
>
> That's dramatically different than proposing I change my entire programming style. GC was an unqualified, sand-pounding, sing-it-from-the-hilltops win with a negative cost: they *paid* me not to write code! Glorious.
>
> Until I see a concurrency proposal that is that big of a win with very little cost, I'll remain skeptical of wide adoption. STM might be it, but it's going to have to be dead simple (from the user's perspective) STM that melts into the background for most developers. And I remain skeptical that this is, finally, the moment that functional programming has been waiting for.

My sense is that there isn't one answer to concurrency; we'll need a good toolbox from which to choose. Java provides locks and monitors in the language, and java.util.concurrent offers quite a few options in the standard library. Scala adds in actors and will add more things in the future. Clojure also provides several different options: CAS, STM, agents. Plain old immutable objects and pure functions are helpful too. But I don't see any silver bullets.

> As always, worse is better,
|
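P.S. To make the "locally imperative, externally pure" point concrete, here is a tiny Java-flavored sketch (a hypothetical method): the loop mutates only local variables, so the method as a whole is still a pure function of its input.

class MathUtil {
    static int sumOfSquares(int[] xs) {
        int total = 0;             // mutable, but local: invisible to callers and other threads
        for (int x : xs) {
            total += x * x;
        }
        return total;              // same input, same output, no side effects
    }
}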
Posts: 98 / Nickname: achilleas / Registered: February 3, 2005 2:57 AM
Re: Time is the New Memory
September 23, 2009 2:30 AM
|
> I'm also not sure functional programming is as much
> awkward as it is unfamiliar to the mainstream. I.e., it is awkward to people because they are unfamiliar with it. I find it awkward sometimes, especially when I would need to use recursion to be full-on functional. I do use recursion sometimes, but since I'm working in Scala, I can also just use a while loop and a var, and I do that from time to time. I make sure it is all local, so each thread gets its own copy; then it's perfectly fine to do it the way it's done in Imperative-stan.
>
> If you forget the word "functional" and just say final variables, immutable objects, and methods that return a value based on just the passed data without having any side effects, that really doesn't sound that unfamiliar or awkward. String is an immutable object. Is java.lang.String awkward for Java programmers to use? How about String.valueOf(5)? That's a pure function. It just operates on its input, returns a value, and has no side effects. I doubt that's awkward for Java programmers, and neither are final variables. What probably is awkward, however, is using recursion instead of looping. Recursion isn't that hard to understand, and I figure most Java programmers know what it means and how to do it, but it's not the way we usually have done things in Imperative-stan, so we're not used to it.

That's good for simple things like a string, but if more ambitious things are tried, then FP gets very complex quickly. Just witness how difficult it is to manipulate trees (search for Huet's zipper pattern), for example... I seriously doubt that the majority of programmers will ever understand such things as the zipper. And then there is other complex stuff... for example, in trying to program a game in Haskell, one has to use something like Haskell/Arrows and be familiar with concatenative programming... pure FP certainly isn't for the common folks. |
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Re: Time is the New Memory
September 23, 2009 11:14 AM
|
> > I'm also not sure functional programming is as much
> > awkward as it is unfamiliar to the mainstream. I.e., it is awkward to people because they are unfamiliar with it. I find it awkward sometimes, especially when I would need to use recursion to be full-on functional. I do use recursion sometimes, but since I'm working in Scala, I can also just use a while loop and a var, and I do that from time to time. I make sure it is all local, so each thread gets its own copy; then it's perfectly fine to do it the way it's done in Imperative-stan.
> >
> > If you forget the word "functional" and just say final variables, immutable objects, and methods that return a value based on just the passed data without having any side effects, that really doesn't sound that unfamiliar or awkward. String is an immutable object. Is java.lang.String awkward for Java programmers to use? How about String.valueOf(5)? That's a pure function. It just operates on its input, returns a value, and has no side effects. I doubt that's awkward for Java programmers, and neither are final variables. What probably is awkward, however, is using recursion instead of looping. Recursion isn't that hard to understand, and I figure most Java programmers know what it means and how to do it, but it's not the way we usually have done things in Imperative-stan, so we're not used to it.
>
> That's good for simple things like a string, but if more ambitious things are tried, then FP gets very complex quickly. Just witness how difficult it is to manipulate trees (search for Huet's zipper pattern), for example... I seriously doubt that the majority of programmers will ever understand such things as the zipper. And then there is other complex stuff... for example, in trying to program a game in Haskell, one has to use something like Haskell/Arrows and be familiar with concatenative programming... pure FP certainly isn't for the common folks.

That may be true. I don't know. I've never heard of "the zipper," and I don't know Haskell. I'm not sure pure FP is what anyone was talking about here. Is Clojure trying to be pure FP? I don't know Clojure much yet, but it looks from the outside like it offers ways to do mutation. It just tries to make them more explicit. Scala for sure isn't pure FP. Behind an actor, for example, you can do things just as imperatively as you want, so long as you're sure not to share that mutable data with other actors or threads. In fact, you can write Scala the same way you write Java if you want.

What I find has made my code better is not going pure FP, which I've never done, but turning the knob more towards the functional style. I haven't experienced the functional style adding complexity, possibly because I've been doing a hybrid functional/imperative style. Actually, I was always doing a hybrid. It's just that prior to my exposure to Scala I wrote more imperative and less functional code. Now it is the other way around. |
Posts: 2 / Nickname: tolsen / Registered: March 21, 2007 2:54 AM
Re: Time is the New Memory
September 23, 2009 0:11 PM
|
Well, the other nice thing about Scala is that you get to argue for (and defend) both sides.

On topic: there's a Clojure lecture about concurrency at http://blip.tv/file/812787 |
Posts: 9 / Nickname: cemerick / Registered: June 3, 2004 8:40 AM
Re: Time is the New Memory
September 24, 2009 2:31 AM
|
> > That's good for simple things like a string, but if
> > more ambitious things are tried, then FP gets very complex quickly. Just witness how difficult it is to manipulate trees (search for Huet's zipper pattern), for example... I seriously doubt that the majority of programmers will ever understand such things as the zipper. And then there is other complex stuff... for example, in trying to program a game in Haskell, one has to use something like Haskell/Arrows and be familiar with concatenative programming... pure FP certainly isn't for the common folks.
>
> That may be true. I don't know. I've never heard of "the zipper," and I don't know Haskell. I'm not sure pure FP is what anyone was talking about here. Is Clojure trying to be pure FP? I don't know Clojure much yet, but it looks from the outside like it offers ways to do mutation. It just tries to make them more explicit. Scala for sure isn't pure FP. Behind an actor, for example, you can do things just as imperatively as you want, so long as you're sure not to share that mutable data with other actors or threads. In fact, you can write Scala the same way you write Java if you want.

Having coded more trees by hand in Java and C and C++ than I care to admit, I'll say that Huet zippers are far simpler and easier to use than the traditional approaches.

But anyway, no, clojure is not aiming to be pure FP -- it has top-notch Java interop features, and therefore is not pure FP in a definitional way, regardless of other design decisions. As for it 'offering ways to do mutation', the point is that it offers a framework in which the semantics of change are known, and simpler than anything-goes imperative mutation. Of course, if you don't want to buy into that, go ahead and fiddle with any Java libraries you want (this becomes necessary when doing work in Swing, for example).

- Chas |
Posts: 20 / Nickname: raoulduke / Registered: April 14, 2006 11:48 AM
Re: Time is the New Memory
October 2, 2009 2:25 PM
|
> Compare with the proposed analogy, the advent of GC:
> pre-GC I had to be pretty smart all the time. Then GC came along and, to a reasonable approximation, I could be pretty stupid all the time. (Or apply my admittedly limited brain power to problems other than memory management.) I wrote the same code I always did, but just left off the memory management bits.

only if you were a really good, careful programmer in the first place. like, if you are the kind of person who draws little maps on grid paper when playing adventure games.

i love gc. really. but it is such a small part of the overall story of memory, let alone resources in general, management! ask any maintenance programmer who has had to try to fix the bloody memory leaks in a java/c#/lisp/python/whatever-gc'd-language-you-like system.

> Until I see a concurrency proposal that is that big of a win with very little cost, I'll remain skeptical of wide adoption. STM might be it, but it's going to have to be dead simple (from the user's perspective) STM that melts into the background for most developers. And I remain skeptical that this is, finally, the moment that functional programming has been waiting for.

here's the problem: semantics. at the moment, we do not have an AI that can say what parts of code need to be in the same transaction (this is the "external" race problem i mentioned before). therefore, we cannot have a miraculous approach to concurrency where we omit more code (per how you describe the wins of GC).

sincerely. |