c# - Why shouldn't all functions be async by default?
The async-await pattern of .NET 4.5 is paradigm-changing. It's almost too good to be true.
I've been porting some IO-heavy code to async-await because blocking is a thing of the past.
Quite a few people are comparing async-await to a zombie infestation, and I found that rather accurate. Async code likes other async code (you need an async function in order to await on an async function). So more and more functions become async, and this keeps growing in the codebase.
Changing functions to async is somewhat repetitive and unimaginative work: throw the async keyword in the declaration, wrap the return value in Task<>, and you're pretty much done. It's rather unsettling how easy the whole process is, and a simple text-replacing script could automate most of the "porting" for me.
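As a sketch of how mechanical that port is, here is a minimal before/after pair (the ReadConfig/ReadConfigAsync names are illustrative, not from any real codebase; File.ReadAllTextAsync assumes .NET Core 2.0 or later):

```csharp
using System.IO;
using System.Threading.Tasks;

public static class Porting
{
    // Before: a blocking, synchronous read.
    public static string ReadConfig(string path)
    {
        return File.ReadAllText(path);
    }

    // After the mechanical port: add async, wrap the return type
    // in Task<>, and await the IO call instead of blocking on it.
    public static async Task<string> ReadConfigAsync(string path)
    {
        return await File.ReadAllTextAsync(path);
    }
}
```

Every caller of ReadConfig then has to become async itself to await ReadConfigAsync, which is exactly the "infestation" described above.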
And now the question: if all the code is slowly turning async, why not just make everything async by default?
The obvious reason I can assume is performance. Async-await has overhead, and code that doesn't need to be async preferably shouldn't pay it. But if performance is the sole problem, surely some clever optimizations could remove the overhead automatically when it's not needed. I've read about the "fast path" optimization, and it seems to me that it alone should take care of most of it.
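For context, the "fast path" refers to the await machinery checking whether the awaited task has already completed; if so, the method continues synchronously instead of scheduling a continuation. A minimal sketch, with an illustrative AddAsync method (not a real API):

```csharp
using System.Threading.Tasks;

public static class FastPathDemo
{
    public static async Task<int> AddAsync(int x, int y)
    {
        // Task.FromResult returns an already-completed task, so this
        // await takes the fast path: no continuation is scheduled and
        // the whole method completes synchronously.
        int value = await Task.FromResult(x);
        return value + y;
    }
}
```

The task returned by AddAsync(2, 3) is already completed by the time the caller sees it; the state-machine overhead remains, but no thread switch or scheduling ever happens.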
Maybe this is comparable to the paradigm shift brought on by garbage collectors. In the early GC days, freeing your own memory was definitely more efficient. But the masses still chose automatic collection in favor of safer, simpler code that might be less efficient (and even that arguably isn't true anymore). Maybe this should be the case here? Why shouldn't all functions be async?
First off, thank you for your kind words. It is indeed an awesome feature and I am glad to have been a small part of it.
"If all the code is slowly turning async, why not just make everything async by default?"
Well, you're exaggerating; all the code isn't turning async. When you add two "plain" integers together, you're not awaiting the result. When you add two future integers together to get a third future integer -- because that's what a Task<int> is, it's an integer that you're going to get access to in the future -- of course you'll likely be awaiting the result.
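The contrast can be shown in a couple of lines (AddNow/AddLater are hypothetical names for illustration):

```csharp
using System.Threading.Tasks;

public static class FutureInts
{
    // Two "plain" integers: no awaiting involved, no overhead.
    public static int AddNow(int a, int b) => a + b;

    // Two "future" integers: each Task<int> is an integer you will
    // have access to in the future, so you await both before adding.
    public static async Task<int> AddLater(Task<int> a, Task<int> b)
    {
        return await a + await b;
    }
}
```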
The primary reason not to make everything async is because the purpose of async/await is to make it easier to write code in a world with many high-latency operations. The vast majority of your operations are not high latency, so it doesn't make any sense to take the performance hit that mitigates that latency. Rather, a key few of your operations are high latency, and those operations are the ones causing the zombie infestation of async throughout the code.
"If performance is the sole problem, surely some clever optimizations can remove the overhead automatically when it's not needed."
In theory, theory and practice are similar. In practice, they never are.
Let me give you three points against this sort of transformation followed by an optimization pass.
The first point is: async in C#/VB/F# is essentially a limited form of continuation passing. An enormous amount of research in the functional language community has gone into figuring out how to identify and optimize code that makes heavy use of continuation passing style. The compiler team would have to solve very similar problems in a world where "async" was the default and the non-async methods had to be identified and de-async-ified. The C# team is not interested in taking on open research problems, so that's big points against right there.
A second point against: C# does not have the level of "referential transparency" that makes these sorts of optimizations more tractable. By "referential transparency" I mean the property that the value of an expression does not depend on when it is evaluated. An expression like 2 + 2 is referentially transparent; you can do the evaluation at compile time if you want, or defer it until runtime and get the same answer. But an expression like x + y can't be moved around in time, because x and y might be changing over time.
Async makes it much harder to reason about when a side effect will happen. Before async, if you said:
M(); N();
and M() was

void M() { Q(); R(); }

and N() was

void N() { S(); T(); }

and R and S produce side effects, then you know that R's side effect happens before S's side effect. But if you have

async void M() { await Q(); R(); }

then suddenly that goes out the window. You have no guarantee whether R() is going to happen before or after S() (unless of course M() is awaited; but of course its Task need not be awaited until after N().)
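The reordering can be made concrete with a self-contained sketch (the shared log list, the Task.Delay standing in for a high-latency Q, and the Run driver are my additions for illustration):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public static class Ordering
{
    public static readonly List<string> Log = new List<string>();

    static void R() => Log.Add("R");
    static void S() => Log.Add("S");
    static void T() => Log.Add("T");

    // Stand-in for a high-latency operation.
    static async Task Q() => await Task.Delay(50);

    // M returns to its caller at the first await, so R runs later.
    public static async Task M() { await Q(); R(); }
    public static void N() { S(); T(); }

    public static void Run()
    {
        Task m = M();  // returns at the await inside M; R has not run yet
        N();           // S and T run while Q's delay is still pending
        m.Wait();      // only now is R guaranteed to have happened
        // Log now reads S, T, R: the synchronous guarantee that R's
        // side effect precedes S's has been reversed.
    }
}
```

Had M and N stayed synchronous, the log would have read Q-then-R-then-S-then-T order, with R strictly before S.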
Now imagine that this property of no longer knowing what order side effects happen in applies to every piece of code in your program, except those pieces that the optimizer manages to de-async-ify. Basically you have no clue anymore which expressions will be evaluated in what order, which means that all expressions need to be referentially transparent, which is hard in a language like C#.
A third point against: you then have to ask "why is async so special?" If you're going to argue that every operation should actually be a Task<T>, then you need to be able to answer the question "why not Lazy<T>?" or "why not Nullable<T>?" or "why not IEnumerable<T>?" Because we could just as easily do that. Why shouldn't it be the case that every operation is lifted to nullable? Or every operation is lazily computed and the result cached for later, or the result of every operation is a sequence of values instead of just a single value? You then have to try to optimize all those situations where you know "oh, this must never be null, so I can generate better code", and so on. (And in fact the C# compiler does do so for lifted arithmetic.)
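That last parenthetical refers to the compiler's lifted operators over Nullable<T>, which already behave like a built-in monadic lifting; a small demonstration:

```csharp
public static class LiftedArithmetic
{
    public static int? Demo()
    {
        int? x = null;
        int? y = 3;
        // The compiler lifts + over Nullable<int>: if either operand
        // is null the result is null, otherwise the values are added.
        int? sum = x + y;      // null, because x is null
        int? sum2 = 2 + y;     // 5, both operands have values
        return sum ?? sum2;    // null-coalesces to 5
    }
}
```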
Point being: it's not at all clear to me that Task<T> is so special as to warrant this much work.
If these sorts of things interest you then I recommend you investigate functional languages like Haskell, which have much stronger referential transparency and permit all kinds of out-of-order evaluation and automatic caching. Haskell also has much stronger support in its type system for the sorts of "monadic liftings" I've alluded to.