Implementing a Neural Network with a Custom Objective Function
I want to build a neural network that takes a matrix A as input and outputs a matrix B such that a scalar C = f(A,B) is maximized. (The function f() is a custom, complex computation involving random values, probability densities, matrix norms, and a series of other calculations.)
I tried to directly use 1/f(A,B) or -f(A,B) as the loss function, but I encountered an error stating: "The value to be differentiated is not tracked. It must be a tracked real number dlarray scalar. Use dlgradient to track variables in the function called by dlfeval." I suspect this is because f(A,B) is not differentiable.
However, I've also seen people say that dlgradient can differentiate any function, no matter what it is.
So I'm not sure whether the function f() is too complex to be used in a loss function for computing gradients, or whether there's an issue with my code.
If I can't directly use its reciprocal or negative as the loss function, how should I go about training this neural network? Currently, I only know how to train by providing target values and using loss functions such as mse or huber.
Answers (1)
- Convert the variables that you want to differentiate with respect to into dlarray objects.
- Create a dedicated myCustomLoss function that computes the quantity you want to minimize (for example, -f(A,B)) and uses dlgradient to return its gradients with respect to the learnable parameters.
- Call the myCustomLoss function using dlfeval.
- Use a solver update function such as adamupdate, which takes the learnable gradients as input and updates your learnable parameters (see the sketch after this list).
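
Below is a minimal sketch of this workflow, assuming a small fully connected dlnetwork and a hypothetical stand-in objective f. The architecture, the sizes, and f itself are illustrative placeholders, not your actual computation. The important constraint is that every operation inside f must support dlarray inputs; otherwise the automatic-differentiation trace is broken and you get exactly the "value to be differentiated is not tracked" error from the question.

% A small network mapping a 4x4 matrix (flattened to 16 features) to 16 outputs.
layers = [
    featureInputLayer(16)
    fullyConnectedLayer(32)
    reluLayer
    fullyConnectedLayer(16)];
net = dlnetwork(layers);

A   = rand(4,4);              % example input matrix
dlA = dlarray(A(:),"CB");     % step 1: convert to a dlarray ("CB" = channel x batch)

avgGrad = []; avgSqGrad = []; % Adam solver state
for iteration = 1:200
    % Step 3: evaluate the loss and gradients through dlfeval.
    [loss,gradients] = dlfeval(@myCustomLoss,net,dlA);
    % Step 4: update the learnable parameters with adamupdate.
    [net,avgGrad,avgSqGrad] = adamupdate(net,gradients,avgGrad,avgSqGrad,iteration);
end

function [loss,gradients] = myCustomLoss(net,dlA)
    % Step 2: forward pass, then minimize -f(A,B) in order to maximize f(A,B).
    dlB  = forward(net,dlA);
    loss = -f(dlA,dlB);                           % negating turns maximization into minimization
    gradients = dlgradient(loss,net.Learnables);  % gradients w.r.t. the learnable parameters
end

function c = f(dlA,dlB)
    % Hypothetical stand-in objective: every operation here supports dlarray
    % inputs, so dlgradient can trace through the whole computation.
    c = sum(dlA .* dlB,"all") - 0.1*sum(dlB.^2,"all");
end

The key point is that both the loss computation and the dlgradient call happen inside the function invoked by dlfeval, so the whole chain from the learnable parameters to the scalar loss is tracked. Preferring -f(A,B) over 1/f(A,B) also avoids division-by-zero and sign problems when f approaches zero or changes sign.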