Adaptive moment estimation (Adam) is a gradient-based algorithm for optimizing objective functions, used especially in deep learning. Adam combines the benefits of two other popular optimizers, momentum and RMSprop, and adds further refinements.
Momentum adds a fraction of the previous update to the current gradient step, smoothing the descent path and accelerating convergence. RMSprop scales the learning rate for each parameter according to the magnitude of that parameter's recent gradients. Adam combines the two by maintaining exponentially weighted moving averages of both the first and second moments of the gradients for each parameter: the first moment is a running mean of the gradients, and the second moment is a running mean of the squared gradients (the uncentered variance).
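As a minimal illustration of these moving averages, the MATLAB sketch below performs one update of the two moment estimates. The variable names (g, m, v, beta1, beta2) and the example gradient are assumptions made for this illustration; they are not taken from the submitted code.

% Minimal sketch of Adam's moving-average moment estimates (illustrative only).
g  = [0.4; -1.2];      % example gradient for a two-parameter problem
m  = zeros(size(g));   % first-moment estimate, initialized to zero
v  = zeros(size(g));   % second-moment estimate, initialized to zero
beta1 = 0.9;           % decay rate for the first moment
beta2 = 0.999;         % decay rate for the second moment

m = beta1*m + (1 - beta1)*g;        % running mean of the gradients
v = beta2*v + (1 - beta2)*(g.^2);   % running mean of the squared gradients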
Adam also includes bias-correction terms to compensate for the fact that the first- and second-moment estimates are initialized as zero vectors, which biases them toward zero early in training. The corrections are computed from the decay rates beta1 and beta2 of the first and second moments, respectively, together with the current iteration count.
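Continuing the sketch above, the bias-corrected estimates and the resulting parameter update could look as follows. The iteration counter t, learning rate alpha, small constant epsilon, and parameter vector theta are again illustrative names rather than the submission's own.

% Bias correction and parameter update, continuing the previous sketch.
theta   = [1; 1];    % example parameter vector (illustrative)
t       = 1;         % iteration counter, starting at 1
alpha   = 0.001;     % learning rate
epsilon = 1e-8;      % small constant to avoid division by zero

m_hat = m ./ (1 - beta1^t);   % bias-corrected first moment
v_hat = v ./ (1 - beta2^t);   % bias-corrected second moment
theta = theta - alpha * m_hat ./ (sqrt(v_hat) + epsilon);   % Adam update step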
One advantage of Adam over momentum or RMSprop alone is that it performs well across a wide range of problems without extensive hyperparameter tuning. Adam is also computationally efficient and handles large data sets and high-dimensional parameter spaces. In addition, because it adapts the effective step size for each parameter to the magnitude of its recent gradients, Adam is more robust to noisy or sparse gradients.
Overall, Adam is a powerful optimizer that combines the strengths of momentum and RMSprop. It has become a popular choice for deep learning applications because of its efficiency, adaptability, and ease of use.
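As a self-contained toy example (not the submission's own demo), the script below applies the update rule sketched above to minimize the simple quadratic f(theta) = 0.5*sum(theta.^2), whose gradient is just theta; all names and settings are illustrative.

% Toy end-to-end Adam loop on a quadratic objective (illustrative only).
theta   = [5; -3];             % starting point
m       = zeros(size(theta));  % first-moment estimate
v       = zeros(size(theta));  % second-moment estimate
alpha   = 0.01;                % learning rate
beta1   = 0.9;
beta2   = 0.999;
epsilon = 1e-8;

for t = 1:2000
    g = theta;                                    % gradient of 0.5*sum(theta.^2)
    m = beta1*m + (1 - beta1)*g;                  % first-moment update
    v = beta2*v + (1 - beta2)*(g.^2);             % second-moment update
    m_hat = m ./ (1 - beta1^t);                   % bias correction
    v_hat = v ./ (1 - beta2^t);
    theta = theta - alpha * m_hat ./ (sqrt(v_hat) + epsilon);
end

disp(theta)   % theta should now sit in a small neighborhood of the minimizer [0; 0]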
Cite As
Mehdi Ghasri (2024). Adaptive moment estimation (Adam) (https://www.mathworks.com/matlabcentral/fileexchange/136679-adaptive-moment-estimation-adam), MATLAB Central File Exchange.