Abstract:
We study whether large language models (LLMs) can generate suitable financial advice and which LLM features are associated with higher-quality advice. To this end, we elicit portfolio recommendations from 32 LLMs for 64 investor profiles, which differ in their risk preferences, home country, sustainability preferences, gender, and investment experience. Our results suggest that LLMs are generally capable of generating suitable financial advice that takes important investor characteristics into account when determining market and risk exposures. The historical performance of the recommended portfolios is on par with that of professionally managed benchmark portfolios. We also find that foundation models and larger models generate portfolios that are easier to implement and more sensitive to investor characteristics than those of fine-tuned models and smaller models. Some of our results are consistent with LLMs inheriting human biases such as home bias. We find no evidence of the gender-based discrimination that has been documented in human financial advice.