Limit distributions for non-differentiable maps. The invariance principle states that the empirical distribution function (e.d.f.), when properly normalized and centered around the distribution function (d.f.), converges weakly to some universal limit process. The result is applicable to i.i.d. data as well as to stationary data (weakly or strongly dependent). In many estimation problems the estimand can be seen as a function of the d.f. Plugging in the e.d.f. gives an estimator of the estimand. Assuming smoothness of the function, such as functional differentiability, implies that the resulting estimator converges weakly. This is the topic of so-called regular functional estimation, in which case (for i.i.d. data) the limit process is typically Gaussian and the rate of convergence is the usual parametric square root of n. In contrast, when the estimand is the density function, the map is not smooth. Assuming some form of regularity of the estimand, such as monotonicity, still yields limit distribution results, however with slower rates and sometimes non-Gaussian limit distributions; this is non-regular estimation. We present a general approach to non-regular estimation, demanding continuity, invariance and locality of the map defining the estimand, and demanding weak convergence of a smoothed version of the rescaled empirical process. This provides a general framework for previous results on limit distributions for the isotonic regression map (by Anevski and Hössjer) and for the Hardy-Littlewood-Pólya map (by Anevski and Fougères), and can be seen as an analogue of the functional differentiability assumption in regular functional estimation.
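To make the plug-in idea concrete, the following is a minimal sketch (not from the paper) of the isotonic regression map applied to data: the least-squares monotone fit computed by the pool-adjacent-violators algorithm (PAVA). The function name and the toy input are illustrative assumptions; the paper's results concern the limit distribution of such non-smooth plug-in estimators, not this particular implementation.

```python
def pava(y):
    """Least-squares non-decreasing (isotonic) fit to the sequence y via
    the pool-adjacent-violators algorithm."""
    # Each block holds [sum, count]; adjacent blocks whose means violate
    # monotonicity are pooled (replaced by their weighted average).
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # Expand the pooled blocks back into a fitted value per observation.
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

# The violating pair (3, 2) is pooled to its average 2.5:
print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

Because pooling is a non-differentiable operation on the input sequence, the resulting estimator falls outside the regular (functionally differentiable) theory, which is the situation the paper addresses.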