Neural networks avoid the curse of dimensionality in the sense that, under certain conditions, their approximation error is independent of the dimension of the input space; this is not the case for, e.g., polynomial expansions.
Consider \(y = f(x_1, x_2, x_3)\). A polynomial expansion with all terms up to degree 7 already contains \(\binom{3+7}{7} = 120\) monomials, and in general the number of monomials of total degree at most \(d\) in \(n\) variables is \(\binom{n+d}{d}\), which grows rapidly with \(n\). This means that, for a given amount of training data, the number of parameters to be estimated quickly becomes huge.
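To make this growth concrete, here is a minimal Python sketch (the function name `num_terms` is ours for illustration) that evaluates the stars-and-bars count \(\binom{n+d}{d}\) for a degree-7 expansion as the input dimension grows:

```python
from math import comb

def num_terms(n_inputs: int, degree: int) -> int:
    """Number of monomials of total degree <= `degree` in `n_inputs` variables,
    i.e. the parameter count of a full polynomial expansion."""
    return comb(n_inputs + degree, degree)

# Parameter count of a full degree-7 expansion versus input dimension:
for n in (3, 10, 100):
    print(f"{n:>3} inputs: {num_terms(n, 7):,} terms")
#   3 inputs: 120 terms
#  10 inputs: 19,448 terms
# 100 inputs: 26,075,972,546 terms
```

Already at 100 inputs the expansion has on the order of \(10^{10}\) coefficients, far more than any realistic training set could pin down.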