We have heard a lot of good things about AI and how it can be used to do various things with ease. But today, there are also many malpractices being carried out with the help of AI. An AI responds to whatever prompt we give it, and if we twist and turn the prompt, we can often coax out almost any type of response, which shows that everything depends on the prompt we write.
Self-medication is harmful
One dangerous habit that is common among individuals is getting their medication from local medical shops. This practice is very common in India. People don't visit doctors; they go to a nearby pharmacy and buy their medicines. Sometimes they know their medicine and purchase it themselves, but most of the time they describe their symptoms to the pharmacist, who then hands over the medication. This is a very bad practice, and it happens for several reasons. One of the biggest is affordability: instead of paying for a doctor's consultation, people can spend just 20 rupees, or even less, and get the medicine directly from the local shopkeeper.
Using ChatGPT for medical needs
With modern technology becoming very popular, people are now using ChatGPT for medical consultation. There are both pros and cons to this practice. Before ChatGPT existed, some people searched their symptoms online and self-medicated accordingly. For some common problems, they still try to treat themselves. It is hard to overstate how dangerous this can be.
There is already a lot of research going on in this area. People use ChatGPT to work through their mental health issues, and in some cases the results are amazing. For many mental health patients, the biggest problem is loneliness, not having anyone to talk to. Today, with the help of ChatGPT, people can overcome this. Instead of spending money on a therapist, they start interacting with ChatGPT. Though it may not ask the probing questions a professional would, it at least provides great relief: people have someone to talk to and explain their problems to.
One of the biggest problems with ChatGPT is its lack of real-time fact-checking. It is good at providing solutions, but only if the user asks the right question and gives honest information. There is no way ChatGPT or any AI model can check whether a person is lying or telling the truth. Fact-checking is not possible without a human touch. So things can turn dangerous when ChatGPT does no fact-checking and gives misleading answers to people who are trying to do a medical consultation with the AI.
I'm sure the technology is advancing fast, and sooner or later we will have AI models that act as a first point of diagnosis before the patient interacts with the doctor. This could be a great use case: the doctor can use the AI to ask the patient all the basic questions before having a conversation with them. It saves a lot of time and helps in getting an overview of the case, but providing solutions should never happen without a real doctor talking to the patient. Even if AI is used just for triaging symptoms, its output should be fully verified by doctors too.
Reference:
https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/
If you like what I'm doing on Hive, you can vote for me as a witness.
Posted Using INLEO