So now people are going back and forth about what the spec says.
Here's what seems to be the relevant portion of RFC 2616 (HTTP/1.1):
9.1 Safe and Idempotent Methods
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in
their interactions over the Internet, and should be careful to allow
the user to be aware of any actions they might take which may have an
unexpected significance to themselves or others.
In particular, the convention has been established that the GET and
HEAD methods SHOULD NOT have the significance of taking an action
other than retrieval. These methods ought to be considered "safe".
This allows user agents to represent other methods, such as POST, PUT
and DELETE, in a special way, so that the user is made aware of the
fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects,
so therefore cannot be held accountable for them.
9.1.2 Idempotent Methods
Methods can also have the property of "idempotence" in that (aside
from error or expiration issues) the side-effects of N > 0 identical
requests is the same as for a single request. The methods GET, HEAD,
PUT and DELETE share this property. Also, the methods OPTIONS and
TRACE SHOULD NOT have side effects, and so are inherently idempotent.
However, it is possible that a sequence of several requests is non-
idempotent, even if all of the methods executed in that sequence are
idempotent. (A sequence is idempotent if a single execution of the
entire sequence always yields a result that is not changed by a
reexecution of all, or part, of that sequence.) For example, a
sequence is non-idempotent if its result depends on a value that is
later modified in the same sequence.
A sequence that never has side effects is idempotent, by definition
(provided that no concurrent operations are being executed on the
same set of resources).
Interestingly, a delete?id=10 link is idempotent. Assuming the id
is unique and not reused, the first GET will be the same as the second
or third. Idempotence says nothing about the difference between N=0
and N=1; it only promises that one request and many requests leave the
server in the same state.
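This is easy to see with a small sketch. The in-memory store and handler below are my own invention, not anything from the spec or the apps under discussion; they just show why repeating the delete changes nothing once the id is unique and never reused:

```python
# Hypothetical in-memory store standing in for the server's data.
items = {10: "draft", 11: "final"}

def delete(item_id):
    # Deleting an id that is already gone is a no-op.
    items.pop(item_id, None)
    return dict(items)

first = delete(10)
second = delete(10)
assert first == second                       # N=1 and N=2: same state,
assert first != {10: "draft", 11: "final"}   # but N=0 is a different state.
```

The two asserts are the whole point: repeating the request is harmless, but the prefetcher making the *first* request is not.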
Other links are interesting. Consider a link that changes the
order of a list. move?id=10&direction=up is not idempotent --
repeated calls change the response to move the item up more and more.
move?id=10&to_position=3 is idempotent. As part of a sequence it is
not idempotent, since it is affected by other links (which could
change the order of other items in the list).
move?pos=10:1&pos=13:2&pos=15:3 (i.e., item_id:position_index)
provides idempotent sequences.
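The three link styles can be sketched as handlers over a list of item ids. These functions are hypothetical stand-ins for the server-side code behind such links, not anything from a real app:

```python
def move_up(order, item_id):
    # move?id=N&direction=up: NOT idempotent, every call shifts further.
    i = order.index(item_id)
    if i > 0:
        order[i - 1], order[i] = order[i], order[i - 1]
    return order

def move_to(order, item_id, position):
    # move?id=N&to_position=P: idempotent in isolation; the item lands at
    # the same index no matter how many times you repeat the request.
    order.remove(item_id)
    order.insert(position, item_id)
    return order

def move_all(positions):
    # move?pos=10:1&pos=13:2&...: the result depends only on the request,
    # not on prior state, so whole sequences of these stay idempotent.
    return [item for item, _ in sorted(positions.items(), key=lambda kv: kv[1])]

# Repeating move_up keeps changing the list; repeating move_to does not.
assert move_up(move_up([11, 12, 10], 10), 10) != move_up([11, 12, 10], 10)
assert move_to(move_to([11, 12, 10], 10, 0), 10, 0) == move_to([11, 12, 10], 10, 0)
assert move_all({10: 0, 11: 1, 12: 2}) == [10, 11, 12]
```

Note the progression: move_all is the only one whose output is determined entirely by the request itself, which is what makes whole sequences of it idempotent.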
But none of these links is safe. "Idempotent" just sounds cooler than
"safe", though, so it's the word getting thrown around.
This paragraph is interesting as well:
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects,
so therefore cannot be held accountable for them.
In the case of the Google Web Accelerator, the user most certainly did
not request the side-effects. The question everyone is asking is: who
is responsible for the side effects, the original web application
developer or Google? I say Google, because they are breaking
expectations. Other people say the web developer, because that's what
the spec says. Frankly, I don't see how the spec says that, though
it's blindingly obvious what convention says. The spec addresses
issues of caching, which is where idempotency comes into play, but has
little to do with this situation (though the GWA has been accused of
breaking that too
-- which I blame on IE for not implementing Vary properly and so
rendering a useful header conventionally useless).
Remember what's required to make the GWA work. It's not just the delete
links, though those are the most painful ones. A mail program that marks
mail read will be broken by the GWA. A logout link will be broken. A
vote-for-this link will be broken. And all the fixes involve Javascript;
either you make your site inaccessible to people without Javascript, or
GWA will break your site (since it acts just like a Javascript-disabled
client). And since HTML forbids nested forms, you can't do what
people are suggesting; you can't turn everything unsafe into a POST.
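The server side of that fix is simple to sketch, even if the markup side isn't. A hypothetical handler (no particular framework's API) just refuses to do unsafe work on GET:

```python
def handle_delete(method, item_id, items):
    # A link-following prefetcher like the GWA only issues GETs, so it
    # gets 405 Method Not Allowed here and the item survives.
    if method != "POST":
        return 405, dict(items)
    items.pop(item_id, None)
    return 200, dict(items)

assert handle_delete("GET", 10, {10: "x"}) == (405, {10: "x"})
assert handle_delete("POST", 10, {10: "x"}) == (200, {})
```

The hard part is everything else: getting the browser to send that POST from something that looks and works like a link, which is where the Javascript requirement comes in.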
To me, the GWA is a kind of loophole in the spec, not something the
spec allowed for. It seems like it makes sense, because it's doing
what bots have always done, trolling around for content. But it's
doing so pretending it's a user, and that's why it doesn't work with
the web we have. If they want a new spec about how to do that, okay.
Of course, an HTTP/1.2 that clarifies this stuff is unlikely, if only
for all the reasons that Google has uncovered here.