Re: [RFC] Timeouts on HTTP requests

From: Junio C Hamano <junkio@cox.net>
Date: 2005-10-19 16:02:14
Nick Hengeveld <nickh@reactrix.com> writes:

> Our QA department today checked what would happen if the network connection
> went away completely in the middle of an HTTP transfer.  It looks as though
> the answer is that git-http-fetch sits there forever waiting for CURL to
> return something.

Ouch.

> I'm thinking of taking advantage of CURL's capability of aborting a request
> if the transfer rate drops below a threshold for a specified length of time
> using a new pair of environment variables and/or config file settings:
>
> GIT_HTTP_LOW_SPEED_LIMIT/http.lowspeedlimit
> GIT_HTTP_LOW_SPEED_TIME/http.lowspeedtime
>
> Does this make sense, and if so should there be defaults if nothing is
> specified?
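
These map straight onto libcurl's CURLOPT_LOW_SPEED_LIMIT and
CURLOPT_LOW_SPEED_TIME options, so the wiring in the curl setup
would be roughly like this (a sketch only; the env-var reading
below is for illustration, not the actual http-fetch code):

    #include <curl/curl.h>
    #include <stdlib.h>

    static void set_low_speed_options(CURL *curl)
    {
        const char *speed_limit = getenv("GIT_HTTP_LOW_SPEED_LIMIT");
        const char *speed_time = getenv("GIT_HTTP_LOW_SPEED_TIME");

        if (speed_limit && speed_time) {
            /* abort if the transfer stays below speed_limit
             * bytes/sec for speed_time seconds */
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT,
                             atol(speed_limit));
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME,
                             atol(speed_time));
        }
    }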

I suspect these would be quite different between DSL and
localnet, so I doubt there is a reasonable default value for
giving up quickly.

On the other hand, having _no_ activity for, say, 30 seconds
would indicate a dead link on either modem or localnet.
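
If the knobs end up spelled the way the proposal has them, that
kind of dead-link cut-off would look something like this
(illustration only, not a shipped default):

    [http]
            lowspeedlimit = 1
            lowspeedtime = 30

i.e. give up once we see less than one byte per second for 30
seconds straight.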

BTW, I've been thinking about giving defaults by shipping
templates/config (i.e. no compile-time defaults).  One trick I
found cute is to have "clone.keeppack = 1" in the templates so
it gets applied to any newly built repository, especially now
that kernel.org has git-daemon enabled.
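
In templates/config that would be just (contents here are only
a sketch of the idea):

    [clone]
            keeppack = 1

and every newly built repository would start out with it.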
