Optimizing Manual QDA Implementation in Python: Numerical Stability and Performance Concerns


I’m implementing Quadratic Discriminant Analysis (QDA) manually in Python for image classification on high-dimensional data. To keep the calculations involving covariance matrices, inverses, and determinants numerically stable, I converted my dataset from int8 to float64:

# Reorder axes so samples come first, flatten each 28x28 image to
# 784 features, and cast from int8 to float64.
X_train = train['X'].transpose(2, 0, 1).reshape(-1, 28*28).astype(np.float64)

Given that float64 doubles the memory usage of float32, I’m concerned about the performance impact on large datasets (a quick memory comparison is sketched after the questions below). My questions are:

  1. Is converting to float64 the best practice for ensuring numerical stability in such computations, or are there less resource-intensive alternatives?
  2. Could using float32 significantly increase the risk of numerical errors in QDA, where covariance matrix computations are critical?
  3. Are there specific scenarios or conditions under which float32 might suffice without compromising the accuracy and stability of the QDA model?
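
For context, here is the rough memory trade-off I’m weighing (the sample count is illustrative):

import numpy as np

# Illustrative size: 60,000 images x 784 features.
n_samples, n_features = 60_000, 784
X64 = np.zeros((n_samples, n_features), dtype=np.float64)

print(X64.nbytes / 1e6)                     # ~376 MB at 8 bytes/value
print(X64.astype(np.float32).nbytes / 1e6)  # ~188 MB at 4 bytes/value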

The dataset contains thousands of images, each reshaped into a 1D array of 784 features (28×28 pixels). Computing a separate covariance matrix for each class by hand is what raised my concerns about numerical precision.
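
My per-class estimation looks roughly like the sketch below (the function name and eps value are illustrative). Adding a small ridge term to each covariance’s diagonal is one standard way to keep the 784×784 matrices invertible when a class has relatively few samples:

import numpy as np

def qda_fit(X, y, eps=1e-6):
    """Estimate per-class means, covariances, and priors for QDA."""
    classes = np.unique(y)
    mu, cov, pi = [], [], []
    for c in classes:
        Xc = X[y == c]
        mu.append(Xc.mean(axis=0))
        # Ridge term eps * I guards against singular covariances when
        # the per-class sample count is small relative to 784 features.
        cov.append(np.cov(Xc, rowvar=False) + eps * np.eye(X.shape[1]))
        pi.append(len(Xc) / len(X))
    return {'mu': mu, 'cov': cov, 'pi': pi}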

Moreover, I’ve hit a serious performance problem with my custom qda_predict function, which uses scipy.stats.multivariate_normal.logpdf to compute the per-class likelihoods. Even after reducing my dataset to a subset of 100 samples, the function runs for an extremely long time (it appears never to finish), making it impractical for anything larger:

import numpy as np
from scipy.stats import multivariate_normal

def qda_predict(X, params):
    mu = params['mu']    # per-class mean vectors
    cov = params['cov']  # per-class covariance matrices
    pi = params['pi']    # per-class prior probabilities
    y_pred = []
    for x in X:
        scores = []
        for c in range(len(mu)):
            # Log-likelihood of x under class c, plus the log-prior.
            score = multivariate_normal.logpdf(x, mean=mu[c], cov=cov[c],
                                               allow_singular=True) + np.log(pi[c])
            scores.append(score)
        y_pred.append(np.argmax(scores))
    return np.array(y_pred)
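
My current suspicion (sketched below, unverified): scipy decomposes the covariance matrix on every logpdf call, so calling it once per sample repeats an expensive 784×784 factorization for each sample-class pair. Since logpdf also accepts a 2D array of points, scoring all samples per class in a single call should amortize that factorization across the batch:

import numpy as np
from scipy.stats import multivariate_normal

def qda_predict_batched(X, params):
    # One logpdf call per class scores every sample at once, so the
    # covariance decomposition happens n_classes times instead of
    # n_samples * n_classes times.
    mu, cov, pi = params['mu'], params['cov'], params['pi']
    scores = np.stack([
        multivariate_normal.logpdf(X, mean=mu[c], cov=cov[c],
                                   allow_singular=True) + np.log(pi[c])
        for c in range(len(mu))
    ])  # shape: (n_classes, n_samples)
    return np.argmax(scores, axis=0)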

When I use the built-in QuadraticDiscriminantAnalysis from sklearn, performance improves dramatically: fitting and predicting on the same subset takes about 20 seconds. This stark difference suggests inefficiencies in my implementation.
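
For reference, the sklearn baseline I’m comparing against looks like this (assuming y_train and X_test are defined alongside X_train; the reg_param value is just an illustrative choice):

from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# reg_param shrinks each class covariance toward a scaled identity,
# which also sidesteps the singularity issue in 784 dimensions.
qda = QuadraticDiscriminantAnalysis(reg_param=0.1)
qda.fit(X_train, y_train)
y_pred = qda.predict(X_test)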

I’m looking for insights or recommendations on managing numerical stability versus performance trade-offs in this scenario, as well as specific optimizations to reduce runtime without compromising model accuracy. Any advice on how sklearn achieves its efficiency and whether those strategies could be applied to my manual implementation would be greatly appreciated.
