APA's PsycArticles contains more than 250,000 full-text research articles in psychology, and many of these articles have real-world business implications relevant to researchers and students in academic disciplines outside psychology and the social sciences. Here is an interesting example on artificial intelligence.
A recently published article, "Artificial Intelligence and Organizational Strategy: Ethical and Governance Implications" by Larry W. Norton (Consulting Psychology Journal, Vol. 77(2), Jun 2025, 131-141), addresses not just how AI will fundamentally redefine work but how we must implement governance and take management responsibility for outcomes, both intended and unintended.
As Norton’s research makes clear, AI is not just about automation; it is shaping strategy, driving innovation, and changing the definition of everything from customer value to product development. Referencing several AI use cases, including Capital One and Mayo Clinic, Norton argues that if you’re not thinking about AI strategically, you’re already behind.
There are real risks: algorithmic bias, data privacy, even unintended societal harm. The article is clear that we need to be smart about how AI is deployed, balancing the pursuit of profit with responsible implementation, starting with low-risk, high-impact use cases and scaling up as capabilities mature. Even a casual awareness of how AI has dominated recent business and economic commentary underscores that point. This is where Norton offers a research-based formula for corporations to manage their AI implementations and make governance not simply a ‘nice to have’ but a business imperative.
The author lays out what effective AI governance looks like: everything from ensuring data quality to having clear stakeholder accountability, regulatory compliance and, crucially, ethical oversight. Norton references established frameworks such as the National Institute of Standards and Technology’s AI Risk Management Framework and the IEEE’s Ethically Aligned Design, stressing transparency, explainability, fairness, and accountability in AI systems.
Norton offers three key recommendations:
- treat AI risk mitigation as a competitive differentiator;
- tailor governance models to the ethical and business risks of specific AI applications;
- start small and scale responsibly.
He also calls for increased AI literacy among consultants and organizational leaders, arguing that understanding AI’s capabilities and limitations is essential for ethical and effective deployment. Significantly, he argues that AI oversight needs to go right up to board level, particularly in firms where AI development or implementation is central to business strategy.
Find this article: Consulting Psychology Journal, Vol. 77(2), Jun 2025, 131-141
A sample of further PsycArticles readings on the intersection of psychology and artificial intelligence includes:
- How and for whom using generative AI affects creativity: A field experiment.
- Unethical and harmful effects of artificial intelligence on human interactions and well-being: What organizational consultants can do.
- Consultants’ and managers’ ethical and legal responsibilities in artificial intelligence applications.