objective-c - iOS: create the inverse of a CAShapeLayer mask
I'm a fresh iOS developer working on an iOS drawing application that replays user sketches using x/y coordinates and timestamps. To draw simple pen events, I have a UIView that serves as the drawing view, and I draw to the view using CAShapeLayers like so:
if (index == 0) {
    CGPathMoveToPoint(path, NULL, point.x, point.y);
} else {
    CGPathAddLineToPoint(path, NULL, point.x, point.y);
}

......

CAShapeLayer *permLayer = [CAShapeLayer layer];
[permLayer setFrame:CGRectMake(0, 0, 1024, 640)];
[permLayer setFillColor:[[UIColor clearColor] CGColor]];
[permLayer setStrokeColor:currentColor.CGColor];
[permLayer setLineWidth:[event[@"strokeWeight"] floatValue]];
[permLayer setLineJoin:kCALineJoinRound];
[permLayer setLineCap:kCALineCapRound];

......

[[self layer] addSublayer:permLayer];

......
All of which looks fine:
Now I want to have an erase event, which looks like this:
I tried to accomplish this using a CAShapeLayer mask, drawing the CGPath of the erase event as the mask. I ended up with this:
….which is the exact opposite of what I want. I guess I need to somehow inverse the mask of the CAShapeLayer created from the CGPath of the erase event? I'm not sure of the proper way to approach this without a lot of math I'm not familiar with.
The other caveat is that the animation is tied to a CADisplayLink as these drawing animations are being refreshed, so performance is important. I've tried drawing with UIImages instead of CAShapeLayers before, using kCGBlendModeClear to achieve the erase effect, but performance suffered terribly. Any advice or insights are appreciated!
"I guess I need to somehow inverse the mask of the CAShapeLayer created from the CGPath of the erase event?"
the "inverse mask" ask not can described cashapelayer
(at least there's no straightforward solution). if want stay cashapelayer
, add "erase path" background color of image.
However, if I understand your implementation correctly, I see a problem: you add a sublayer for each stroke the user draws. If a user draws complex pictures with many strokes, you end up with a huge stack of layers that is sure to lead to performance problems at some point.
I suggest you switch your approach and draw into an image buffer. It sounds like you tried and dismissed that approach for performance reasons, but if done correctly, the performance can be pretty good.
Create your own bitmap context with

_context = CGBitmapContextCreate(...);
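The exact parameters depend on your setup; as a rough sketch (the 1024x640 size is taken from the layer frame in the question, the pixel format is an assumption):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
_context = CGBitmapContextCreate(NULL,        // let Core Graphics allocate the buffer
                                 1024, 640,   // width and height in pixels
                                 8,           // bits per component
                                 0,           // bytes per row; 0 lets CG compute it
                                 colorSpace,
                                 kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);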
and in your touchesBegan/touchesMoved methods, draw into that context like this:
if (isEraseEvent)
    CGContextSetBlendMode(_context, kCGBlendModeClear);
else
    CGContextSetBlendMode(_context, kCGBlendModeNormal);

// Configure the current stroke: line width, color etc.
CGContextSetLineWidth(_context, _currentStroke.lineWidth);

// The stroke just added a segment from point p to point q
CGContextMoveToPoint(_context, p.x, p.y);
CGContextAddLineToPoint(_context, q.x, q.y);
CGContextStrokePath(_context);
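For context, here is a minimal sketch of how that might sit inside touchesMoved; the _lastPoint ivar and the drawSegmentFromPoint:toPoint: helper (wrapping the code above) are hypothetical names:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint q = [touch locationInView:self];
    CGPoint p = _lastPoint;                  // assumed ivar, set in touchesBegan
    [self drawSegmentFromPoint:p toPoint:q]; // assumed helper containing the code above
    _lastPoint = q;
}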
After each such event, you display the context. If you render the whole context into a UIImage and display that UIImage, performance might suffer. Instead, let the rendering be done by a CALayer delegate. Have the delegate implement the method
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGImageRef tempImage = CGBitmapContextCreateImage(_context);
    CGContextDrawImage(ctx, self.bounds, tempImage);
    CGImageRelease(tempImage);
}
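One possible way to wire this up, as a sketch (delegateObject stands for whatever object implements drawLayer:inContext: above, and _theLayer matches the naming used further below):

_theLayer = [CALayer layer];
_theLayer.frame = self.bounds;
_theLayer.contentsScale = [UIScreen mainScreen].scale; // render crisply on Retina
_theLayer.delegate = delegateObject; // implements drawLayer:inContext:
[self.layer addSublayer:_theLayer];
[_theLayer setNeedsDisplay]; // initial render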
Don't forget to tell the layer that its content has changed whenever you draw into the context. Calling setNeedsDisplay
redisplays the complete layer, which most of the time is way more than you need. Instead, call:
CGRect r = CGRectMake(MIN(p.x, q.x), MIN(p.y, q.y), ABS(p.x - q.x), ABS(p.y - q.y));
CGFloat inset = ...;
[_theLayer setNeedsDisplayInRect:CGRectInset(r, inset, inset)];
to ensure you only rerender the part of the image whose content has changed. The inset
should be a (negative) inset that takes the stroke width into account.
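For example (an assumption, reusing _currentStroke from above), something like:

// Half the line width on each side, plus a pixel of slack for the round caps.
CGFloat inset = -(_currentStroke.lineWidth / 2.0f + 1.0f);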
Note that all the stuff I've written above (except the context creation) is called once for every touchesMoved
event, i.e. strokes are rendered and displayed the moment the user inputs them (or, in your case, as the drawing process is replayed), and you never need to render or display more than that.