Changed the activation function but it returns the same accuracy value #11
Comments
Hi, a disclaimer: it's been a few years since I've looked at this code. It seems that the gradient for the activation is hardcoded within Train.cpp, so you might need to update the backpropagation code if you want to change the activation: https://github.com/neurosim/MLP_NeuroSim_V3.0/blob/cc1386372d01fc022a9bf52cabd8c96e94fb838b/Train.cpp#L507 (line 507 in cc13863)
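In practice that means keeping the forward activation and the derivative used in backpropagation consistent with each other. A minimal sketch of the pairing; the function names here are illustrative, not identifiers from the repo:

```cpp
#include <cmath>

// Derivatives written in terms of the activation output a, the form
// backpropagation code typically uses:
//   sigmoid: a = 1 / (1 + exp(-x))  ->  da/dx = a * (1 - a)
//   tanh:    a = tanh(x)            ->  da/dx = 1 - a * a
double activate(double x)      { return std::tanh(x); }
double activateDeriv(double a) { return 1.0 - a * a; }  // sigmoid version: a * (1.0 - a)
```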
Thank you.
Hello, first of all, thank you for your reply. I changed the part you mentioned as follows, but after a certain number of iterations it gets stuck at 10.35%. (I want to use tanh as the activation function.)

// Backpropagation
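Accuracy near 10% on the 10-class MNIST task is chance level, which usually means the weight updates no longer follow the gradient of the forward pass. Here is a hedged sketch of what a consistent tanh output-layer delta could look like under a squared-error loss; the array names and the loss are assumptions for illustration, not taken from Train.cpp:

```cpp
#include <vector>

// Hypothetical output-layer delta for squared error with tanh outputs.
// The sigmoid derivative term a*(1-a) is replaced by tanh's 1-a*a.
std::vector<double> outputDelta(const std::vector<double>& a2,      // tanh activations
                                const std::vector<double>& target)  // one-hot labels
{
    std::vector<double> s2(a2.size());
    for (std::size_t j = 0; j < a2.size(); ++j)
        s2[j] = -2.0 * (1.0 - a2[j] * a2[j]) * (target[j] - a2[j]);
    return s2;
}
```

The hidden-layer delta needs the same substitution, and since tanh outputs lie in (-1, 1), one-hot targets encoded as {0, 1} may also be worth remapping to {-1, 1}.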
Hello again, I apologize for the disturbance. In this application we want to use different activation functions, and for this purpose we are looking to hire someone to do this job for a fee. Can you or someone else help with this? Thank you and best regards.
Hello, when I change the activation function inside the formula.cpp file (for example, when I write the formula for the tanh function), it consistently returns an accuracy value of 9.80%. Do you know why this might be happening?
Thank you for your help.
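One likely cause: editing formula.cpp changes only the forward pass, while the derivative hardcoded in Train.cpp (the line linked above) still assumes sigmoid. For tanh that mismatch is not just inexact; it can reverse the update direction, because tanh activations can be negative. An illustrative snippet, not the repo's code:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double x = -0.5;
    double a = std::tanh(x);        // forward pass now yields a in (-1, 1)
    double gSig  = a * (1.0 - a);   // leftover sigmoid derivative: negative whenever a < 0
    double gTanh = 1.0 - a * a;     // matching tanh derivative: always positive
    std::printf("a=%.3f  sigmoid-form grad=%.3f  tanh grad=%.3f\n", a, gSig, gTanh);
    return 0;
}
```

A constant 9.80% is consistent with the network collapsing onto a single predicted class, which corrupted gradients of this kind tend to produce.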