Human-Building Interaction Through Large Language Models
Abstract
Large Language Models (LLMs) offer opportunities to enhance Human-Building Interaction (HBI) by providing intuitive interfaces to complex smart building systems of systems. These systems are characterized by vast amounts of data in multiple formats, a scarcity of nonconfidential and generalizable information, and the need for domain expertise in interpretation. Applying LLMs to domain-specific tasks such as HBI presents additional challenges. Limited training data makes traditional fine-tuning approaches impractical, while the opacity of LLM training data requires careful integration of domain knowledge to ensure reliable responses. Moreover, different LLMs exhibit varying alignment characteristics, suggesting that achieving both natural interaction and technical accuracy requires a multi-agent approach. These challenges highlight the need for innovative methods that adapt LLMs to specialized domains while maintaining both accuracy and user engagement. In this paper, we develop a zero-shot, LLM-based multi-agent framework for HBI that addresses these challenges, enabling scalable deployment in smart buildings through integration with real-time databases, code repositories, and technical documents. The framework has been tested and validated using a dataset from more than 200 commercial buildings. Results on the HBI domain demonstrate its effectiveness in providing accurate, contextual responses to diverse stakeholders, from tenants to building managers, across a range of building system applications.