We propose a novel approach to extracting audio features based on the evolutionary approximation of instrumental texture in polyphonic audio recordings. A population of mixtures of samples from 51 instruments, covering 165 individual instrument bodies or playing styles, is evolved with the help of musically meaningful genetic operators to produce chords that are as similar as possible to unknown signals. Our algorithm allows for the simultaneous approximation of all onsets/chords in a given audio track. The fitness function is designed to retain mixtures that are not directly comparable because they approximate different segments of a track, such as the intro or a verse. A further advantage is that no labelled signals are required to train supervised models for instrument prediction, and the sample database can easily be extended with further instruments. Although the multi-label classification performance of instrument recognition still has room for improvement, the derived instrument and pitch statistics are comparable to the best selected semantic features from a large set of 566 descriptors covering not only instrument and pitch statistics but also chord, harmony, structure, temporal, dynamics, emotional, vocal, and further characteristics, and even outperform them for half of the tested music categories.
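The core idea of evolving sample mixtures towards a target signal can be sketched as follows. This is a toy illustration, not the paper's actual implementation: the sample bank, spectral representation, population sizes, and mutation operator are stand-in assumptions (real genetic operators would, e.g., swap instrument bodies or transpose pitches).

```python
import random

# Toy stand-ins: each "sample" is a short magnitude spectrum; random
# vectors replace real instrument recordings for this sketch.
random.seed(0)
N_SAMPLES, DIM = 20, 8
bank = [[random.random() for _ in range(DIM)] for _ in range(N_SAMPLES)]
target = [random.random() for _ in range(DIM)]  # unknown chord spectrum

def mix(genome):
    # A genome is a list of sample indices; the mixture averages their spectra.
    return [sum(bank[i][d] for i in genome) / len(genome) for d in range(DIM)]

def fitness(genome):
    # Negative squared distance between mixture and target (higher is better).
    m = mix(genome)
    return -sum((a - b) ** 2 for a, b in zip(m, target))

def mutate(genome):
    # Stand-in operator: replace one sample index at random.
    g = list(genome)
    g[random.randrange(len(g))] = random.randrange(N_SAMPLES)
    return g

# Random initial population of 3-note mixtures.
pop = [[random.randrange(N_SAMPLES) for _ in range(3)] for _ in range(30)]
init_best = max(map(fitness, pop))

# Elitist evolution: keep the 10 best, refill with mutated copies.
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
```

With elitist selection the best fitness is non-decreasing over generations, so `fitness(best)` is at least as good as `init_best`; in the full method one such approximation runs per segment, yielding instrument and pitch statistics as features.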