Filters in a convolutional neural network (CNN) contain model parameters learned from data, and the properties of the filters in a trained deep network directly affect the quality of the learned feature representation. In this talk, we introduce a framework that decomposes convolutional filters as truncated expansions over pre-fixed bases, with adaptive expansion coefficients learned from data. Such a structure not only reduces the number of trainable parameters and the computational load but also imposes filter regularity through basis truncation. Apart from maintaining prediction accuracy across image classification datasets, the decomposed-filter CNN also produces a representation that is provably stable with respect to input variations under generic assumptions. The framework extends to group-equivariant CNNs, where it significantly reduces model complexity and improves the stability of the trained network. Joint work with Qiang Qiu, Robert Calderbank, and Guillermo Sapiro.
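As a concrete illustration (not from the talk itself), here is a minimal sketch of a decomposed-filter convolutional layer in PyTorch. The truncated 2-D DCT basis, the class name DecomposedConv2d, and the hyperparameters kernel_size and K are assumptions of mine for the example; the framework applies to any pre-fixed basis. The layer stores K fixed basis atoms as a buffer and learns only the expansion coefficients, so truncating the basis (small K) both cuts the parameter count and discards high-frequency atoms, which is the source of the filter regularity mentioned above.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def dct2_basis(size: int, K: int) -> torch.Tensor:
    """Return the K lowest-frequency 2-D DCT-II atoms, shape (K, size, size)."""
    n = torch.arange(size, dtype=torch.float32)
    # 1-D DCT-II atoms: row k is cos(pi * (n + 0.5) * k / size) over positions n.
    atoms_1d = torch.cos(math.pi * (n[None, :] + 0.5) * n[:, None] / size)
    # Outer products of 1-D atoms give separable 2-D atoms indexed by (k, l).
    atoms_2d = torch.einsum('ki,lj->klij', atoms_1d, atoms_1d).reshape(-1, size, size)
    # Keep the K atoms with the smallest total frequency k + l (basis truncation).
    freqs = (n[:, None] + n[None, :]).reshape(-1)
    return atoms_2d[torch.argsort(freqs)[:K]]


class DecomposedConv2d(nn.Module):
    """Conv layer whose filters are linear combinations of K fixed basis atoms.

    Only the (out_ch, in_ch, K) expansion coefficients are trainable; the
    basis itself is pre-fixed, in the spirit of the decomposed-filter framework.
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 5, K: int = 8):
        super().__init__()
        self.register_buffer('basis', dct2_basis(kernel_size, K))  # fixed, not trained
        self.coeff = nn.Parameter(torch.randn(out_ch, in_ch, K) / math.sqrt(in_ch * K))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Synthesize the filters from the truncated expansion, then convolve.
        filters = torch.einsum('oik,kxy->oixy', self.coeff, self.basis)
        return F.conv2d(x, filters, padding=filters.shape[-1] // 2)


layer = DecomposedConv2d(in_ch=3, out_ch=16, kernel_size=5, K=8)
y = layer(torch.randn(1, 3, 32, 32))  # -> shape (1, 16, 32, 32)
```

With kernel_size=5 and K=8, each filter channel is described by 8 coefficients instead of 25 free weights, roughly a 3x reduction in trainable parameters for that layer, alongside a corresponding reduction in the computation needed to form the filters.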