Perhaps I'm not sufficiently familiar with the literature; it all seems both obvious and largely irrelevant. The last paragraph is:

>> This research supports that fears do exist in the online environment and that fears are learned through direct experience and vicariously. There is a need for a richer theoretical understanding of online fear and distrust to help practitioners better manage fear and distrust, and to reduce the impacts of emotional learning. <<
Maybe I'm reading it wrong, but the emphasis seems to be on reducing the amount of distrust users have after they've been cheated, rather than on making it less likely that they will be cheated.
Your interpretation sounds plausible. It matches the business practices that I typically see.
After all, look at the prevalence of companies using popups that say "We know you like cookies" (gaslighting) and "By entering this site you consent" (implying that mere presence equals consent, which is not legally valid). Companies don't care about people; they just want to manipulate customers toward a desired outcome.