One problem I've been considering is triangle surface meshes. The data is variable in size, with no defined start or end point, and points distant on the surface may share a high amount of mutual information (through symmetry, for example).
One approach I've thought about is applying kernel methods. You can compose kernels, so they scale cleanly regardless of variations in the input dimension. The sum or product of kernels between each node in the input graph and some basis set is itself a kernel. If your kernels describe covariance between observations (as in Gaussian processes), then additional input dimensions have a constraining effect, rather than causing evidence inflation for larger inputs as a typical neural network might.
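A minimal sketch of the composition idea, under some assumptions: an RBF base kernel, a fixed basis set of reference points, and a feature map built by summing base-kernel evaluations from each node against that basis. The inner product of two such feature maps is a valid positive-definite kernel between point sets of different sizes (the function names and the choice of RBF are illustrative, not from the original).

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    # Squared-exponential base kernel between two points.
    d = x - y
    return np.exp(-0.5 * np.dot(d, d) / lengthscale**2)

def set_features(X, basis, lengthscale=1.0):
    # phi(X)[j] = sum_i k(x_i, b_j): a fixed-length feature vector
    # regardless of how many nodes X contains.
    return np.array([sum(rbf(x, b, lengthscale) for x in X) for b in basis])

def set_kernel(X, Y, basis, lengthscale=1.0):
    # Inner product of the feature maps; positive-definite by construction,
    # so it can serve directly as a GP covariance between variable-size inputs.
    return float(np.dot(set_features(X, basis, lengthscale),
                        set_features(Y, basis, lengthscale)))
```

For meshes, `X` and `Y` would be the vertex sets (or node embeddings) of two meshes, and `basis` a shared set of anchor points; the resulting Gram matrix over a collection of meshes is symmetric and positive semi-definite, which is what lets it plug into a Gaussian process as a covariance function.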