I am wondering if there is any demographic background history available.
I would like to see statistics on the people, whether individuals or academics, who report CVEs, along with those CVEs and their severity levels.
The point is to highlight the importance of independent security researchers, as well as groups and startups who either wish to take part in bug
bounty programs or to start getting contracts for security work. There are also people who do this out of pure curiosity and interest in computing, as is the case with many people in the world. Some are trying to get an understanding of, or an education in, embedded systems, software design, and/or the security of both of these areas of specialty.
Many see it as a challenge, kind of like a lock they want to pick to understand how it works, or whether it's truly secure. As someone who understands their world, and as a lawyer who has practiced globally, I want to inform those who are new or unaware that "companies" promising to pay enormous amounts of money for exploits are NOT legal, especially companies that have nothing to do with the product or the hardware and whose headquarters are in jurisdictions with very questionable human rights records and high rankings on any corruption index. There is never enough money to make it worth the risk you're taking on, not only legally but potentially even physically: a recent speaker at DEF CON 2023 rated his work hunting for command and control servers by the likelihood that he could get "vanned". And the reality of the world is that when a group or a company says they won't use your exploit against non-NATO members and offers you 20 million dollars, or whatever the amount, it doesn't matter; it's never going to be worth it, because they might pay you that money and then you might disappear, and anyone who looks at that situation is going to say he or she must have done something wrong. That's the reality we are living in today. It's never going to be legal, and it's never going to be right, because you're potentially putting people's lives, health, and safety in danger. Even if the law doesn't specifically state it, in many countries you are still breaking the law by selling, or even offering to sell, to a "company"; the mere fact that it is a company does not make the sale legal. At the same time, I want to point out to government and industry, as well as to software, firmware, and embedded systems designers, that independent ethical hackers and
independent cybersecurity researchers are significant and substantial members of the team; they are on the right side and are doing the right thing. They
should be treated accordingly, and there should be a better mechanism and a more homogeneous way to report. But in the meantime, until some set of protocols is agreed upon, protect yourself.
I would like to offer some guidelines for discussion purposes and circulate them amongst civil society, meaning the broad group of people engaged in this work, to see what everyone thinks of the proposal. It would be something like a model international guideline or protocol that would not only protect you as an individual or ethical hacker, but also give governments, cybersecurity apparatuses, and the intelligence community a really clear-cut way to help you disclose exploits, proofs of concept, and known or discovered flaws in the correct manner.
The governments and companies
should treat you correctly, respond respectfully when you come forward with this information, and ensure that you get a financial reward under a bug bounty program if one is in place, or at least recognition. A lot of people put a significant amount of time, effort, and work into finding flaws that companies, in many cases negligently and in some cases grossly negligently, leave in their devices' firmware or code, despite repeated attempts to inform them of best practices. A model set of guidelines not only helps protect ethical hackers; it also helps governments ensure you are fairly treated when it comes to any kind of bug bounty. It also gives governments time to adjust their systems and implement protections at the national level even when companies are slow. By taking a route whereby you make a disclosure to the national authorities and then, through them, to the company or software maker in question, especially for flaws in the upper-medium to critical severity range, you put them on your side and present yourself as a responsible researcher or ethical hacker, whichever title you wish to use. So regardless of how quickly or, as in many cases we all know, how very slowly companies patch huge gaping security flaws, which in some cases represent ongoing neglect, at least the national cybersecurity apparatuses will have an understanding of the situation and be able to respond at the national and international level with their partners. My goal is to highlight the fact that in many cases we owe a big debt to individuals, and even groups of individuals, who may or may not do this for money; many do it simply because it's part of who they are, how they grew up, and what they enjoy. And I can say that I have been an Assistant Attorney General and a Deputy County Attorney, and I did a little criminal defense as well.
I wanted to see the other side of the table. My first boss as a prosecutor reminded me, through the rules of practice and professional conduct, that the person sitting in the defendant's seat is one of the people you are sworn to protect: "just be just!" That was the best guidance, and it is how all prosecutors should act and behave; many forget this in their rush to get convictions and stack up wins for whatever job prospects they foresee in their future. I have seen my share of overzealous and aggressive prosecutions where the facts clearly pointed in a different direction. I'm glad to have had such profound guidance from the very beginning. I also know law enforcement officers who have done the job for so long that they fall prey to seeing everyone as bad; their jobs are mentally difficult and demanding, and they often see the worst side of humanity. I understand the needs of the law enforcement and intelligence communities, as well as the protection of national infrastructure, and by going through these people, they can also attest to your compliance with a bug bounty program. If a bounty program denies you the right to confidentially report to the national cybersecurity apparatus, that clause would seem void as "violative of public policy". In this way you have assured a quick response to critical exploits and have put the "public good" on your side. This will pressure the company to patch and fix any exploits or bugs, and help you get the level of bounty or recognition you have earned. The government should be mindful of this and stand with the researcher if a conflict
develops, though ideally that would rarely, if ever, be the case. This approach puts the national government and cybersecurity apparatuses, as well as the intelligence agencies, on your side, whether you're after a bug bounty or just career advancement. It shows that you can be trusted, that you're responsible, and it demonstrates your abilities; that's going to be your payday. That payday far outweighs any sum of money that any of these sketchy "companies" are offering on the free market. You know they are going to use your exploit, the fruits of your intelligence and labor, to harm other people, and there is no amount of money they can pay you that will be worth it at the end of the day. I can assure you that you never want to be anywhere near the worlds in which those people live.
So, simply put: be the hero. Go home with the job, the respect, the honor, and the glory. Be on the good side, and sleep well in any case.
I don't think one size fits all as far as protocols and responsible disclosure go; there will probably be several different types. But you should surely begin by writing down what you're doing and your goals. Set an internal security policy to make sure there's no accidental or premature disclosure of any of your security research, and make it clear to those around you or on your team. Perhaps a confidential repository could be set up via a neutral organization, like a safe deposit box where you can "register" your work, openable only by you but carrying a timestamp. That's just one idea; there could be many ways to document what you're doing and have it known or witnessed by a third party. If there is ever a question about what you were doing, you at least have documentation on hand signed by a friend, a professor, a supervisor, or a lawyer, for example. If you're on a team, make an agreement between all of you on how the data is to be maintained and what your disclosure protocols are. Make sure there are no accidental leaks and no disagreements between the members of the team. A really good example, one that probably touches everybody's domain, of responsible disclosure and the protection of life and safety is the TETRA research. I'm not sure the response by ETSI was ideal, nor do I agree with their solution, because an open-source system should be made as solid as it can be, and they should invite people to try to break it; if the goal is to operate critical infrastructure with it, you want as much security testing as you can get. One phrase keeps coming up every year from somebody: "security by obscurity is stupidity".
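The "registration" idea above could be as simple as fingerprinting your research notes with a cryptographic hash and having a witness sign or retain the timestamped record. A minimal sketch follows; the function name and record fields are my own illustration, not any established registry's format, and a real setup would anchor the record with a neutral third party or timestamping service rather than rely on a local clock.

```python
import hashlib
import json
from datetime import datetime, timezone

def register_research(path: str) -> dict:
    """Create a timestamped fingerprint of a research file.

    The SHA-256 digest commits to the file's exact contents without
    revealing them; the resulting record can be given to a lawyer,
    professor, or other witness to establish when the work existed.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: print a registration record for a hypothetical notes file.
# record = register_research("research_notes.txt")
# print(json.dumps(record, indent=2))
```

Because only the digest leaves your hands, this keeps the findings themselves confidential until you choose to disclose, while still letting a third party later verify that the notes match the registered fingerprint.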
And every year there's a story about a company that took one, two, or three years to patch firmware that had public and private keys in plaintext (this year it was Google's and Apple's keys, on a major security appliance and service provider).
And a formal protocol that involves the national cybersecurity apparatuses helps prevent companies from playing games by manipulating CVE severity, something I have also heard about repeatedly. This is another way people involved in this research can ensure the correct severity level gets applied to their work: by going through a national cybersecurity apparatus and the associated intelligence infrastructure. All of this can, on some level, have national security implications for every country that uses a system like TETRA, for example, or the current new DDoS attack methodology, since it affects the whole system. It seems reasonable that they need to hear about it. I don't care who makes the item, provides the service, or manufactures the device: if it is serious, they're going to make sure the maker fixes it quickly. Following responsible disclosure guidelines, you don't publish, you don't say anything, until they have patched or fixed it, but that should happen within a reasonable time frame. I can guarantee you that the national cybersecurity apparatus and the intelligence services are going to make sure that gets done really quickly. Perhaps that is not necessary in all cases, such as non-critical bugs, but it's not a bad idea to make a simultaneous disclosure, because they need to know as much as the company does, especially if it affects the entire infrastructure of any given country. In addition, you are putting yourself in the middle of a geopolitical issue as an individual, and I don't care if it's for 20 million or 100 million: as I said, there is never going to be enough money to make it worth it.
By doing it correctly and responsibly, you will reap the rewards: you'll sleep well at night, make a good living, and retain your self-esteem, your honor, and your integrity. You will also gain the respect of countless people for having done the right thing, and then you're free to publish once it's all been said and done. You can take credit for the work you've done. You're likely able to get whatever job you want; if you're looking for advancement, that's certainly one way to get it.
I must remind you that the laws of every country differ; as always, consult a lawyer wherever you live or work to get legal advice applicable to you. This is merely meant to open a dialogue to address the many issues that seem to repeat year after year and need addressing, since we seem not to be learning from the mistakes that are uncovered each year. I am not anyone's counsel; I am a lawyer, and these are my opinions, but I believe them to be sound ones that highlight many recurring problems that need fixing. I also hear stories year after year of legitimate security researchers being subpoenaed or arrested by law enforcement, when a look at LinkedIn or a phone call could have determined the legitimacy of the person's research and their status as an ethical hacker. By the same token, those who use that term or label inappropriately and try to invoke it to shield themselves from unlawful behavior should be very careful not to end up on the wrong side of what I've just said above.
It is never going to be worth it: you have one life to live, and life is precious, but more fragile than some think it to be. I am, without reservation, on the side of ethical hackers who discover security flaws of any type, whether they are nailing scammers who steal the life savings of the elderly or finding holes that people with bad intent would exploit if they found them first. I wish to warn those who think a quick buck is worth it when the random company offering it seeks to violate the human rights of others and squelch freedom of speech or expression. Also remember that in some countries, speaking your mind can mean the end of your life, so with great power comes great responsibility. Keep doing the right thing.