How to compare two nested models when they have a very small R-squared difference?

I have two models, i.e. v1 = a1 + a2*f + a3*f^2 and v2 = k*(a1 + a2*f + a3*f^2).

Answers (1)

Rohit
on 20 Mar 2023

When comparing two nested models with very small differences in R-squared, it is important to consider other metrics and factors to determine which model is better. Here are some suggestions:
- Consider the complexity of the models. A smaller model that explains the data just as well as a larger one is generally preferred, as it is simpler and easier to interpret.
- Look at other goodness-of-fit metrics, such as adjusted R-squared, AIC (Akaike Information Criterion), or BIC (Bayesian Information Criterion). These metrics penalize model complexity, so the smaller model may score better. An F-test can also be used to test whether the larger model (v2) fits significantly better than the smaller model (v1).
- Conduct cross-validation or use a hold-out dataset to test how the models perform on new data. The model with better out-of-sample performance is generally preferred.
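The penalized metrics and the partial F-test suggested above can be sketched in a few lines. This is a minimal Python (NumPy/SciPy) illustration on made-up data: a quadratic signal, with a cubic as the "larger" nested model. The data, polynomial degrees, and function names are hypothetical and only stand in for the questioner's v1/v2 fits.

```python
import numpy as np
from scipy import stats

# Hypothetical data: a quadratic signal plus noise.
rng = np.random.default_rng(0)
f = np.linspace(0.0, 1.0, 50)
v = 1.0 + 2.0 * f + 3.0 * f**2 + rng.normal(0.0, 0.1, f.size)

def fit_metrics(x, y, degree):
    """OLS polynomial fit; return RSS, parameter count, R^2, adjusted R^2, AIC, BIC."""
    n = y.size
    p = degree + 1                                  # parameters incl. intercept
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    rss = float(resid @ resid)
    tss = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)   # penalizes extra parameters
    # Gaussian-likelihood forms of AIC/BIC (constants dropped); lower is better.
    aic = n * np.log(rss / n) + 2 * p
    bic = n * np.log(rss / n) + p * np.log(n)
    return rss, p, r2, adj_r2, aic, bic

rss1, p1, r2_1, adj1, aic1, bic1 = fit_metrics(f, v, 2)   # smaller model
rss2, p2, r2_2, adj2, aic2, bic2 = fit_metrics(f, v, 3)   # larger (nested) model

# Partial F-test: does the larger model reduce the RSS significantly?
n = v.size
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)
```

A large p-value means the extra parameter in the larger model is not buying a significantly better fit, which supports keeping the smaller model even though its raw R-squared is marginally lower.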
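The cross-validation suggestion can be sketched the same way: a minimal K-fold loop on hypothetical data, comparing the mean out-of-sample squared error of a smaller and a larger polynomial fit. The data and degrees are again made up for illustration.

```python
import numpy as np

# Hypothetical data: quadratic signal plus noise.
rng = np.random.default_rng(1)
f = rng.uniform(0.0, 1.0, 60)
v = 1.0 + 2.0 * f + 3.0 * f**2 + rng.normal(0.0, 0.1, f.size)

def cv_mse(x, y, degree, k=5):
    """Mean squared prediction error over k folds for a polynomial fit."""
    idx = rng.permutation(x.size)
    errs = []
    for test_idx in np.array_split(idx, k):
        train_idx = np.setdiff1d(idx, test_idx)
        coef = np.polyfit(x[train_idx], y[train_idx], degree)
        pred = np.polyval(coef, x[test_idx])
        errs.append(np.mean((y[test_idx] - pred) ** 2))
    return float(np.mean(errs))

mse_small = cv_mse(f, v, 2)
mse_large = cv_mse(f, v, 3)
# Prefer the model with the lower cross-validated error;
# when the difference is tiny, prefer the smaller model.
```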