TY - GEN
T1 - Real Estate Price Prediction on Generative Language Models
AU - Zhao, Yun
AU - Chetty, Girija
AU - Tran, Dat
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Real estate price prediction is an important field of study that can assist home sellers and property investors in making informed decisions to maximize their profits. However, predicting real estate prices presents a significant challenge in the temporal dimension because house prices fluctuate over time in response to market dynamics and are influenced by a complex array of factors, such as location and the quality of the house. This paper focuses on leveraging transformer-based language models and self-attention for real estate house price prediction. The Transformer architecture is well known for its ability to capture the relationships between words or tokens in a sequence of human language and to process the entire sequence in parallel, enabling more efficient and scalable computation. Our study explores fine-tuning the attention mechanisms and output hidden states of Transformer-based models, comparing their performance against baseline models. Our experiments and analysis demonstrate that the transformer-based attention models outperform the baseline models for real estate price prediction. We also found that utilizing self-attention from unsupervised text learning can enhance the accuracy of real estate price prediction.
AB - Real estate price prediction is an important field of study that can assist home sellers and property investors in making informed decisions to maximize their profits. However, predicting real estate prices presents a significant challenge in the temporal dimension because house prices fluctuate over time in response to market dynamics and are influenced by a complex array of factors, such as location and the quality of the house. This paper focuses on leveraging transformer-based language models and self-attention for real estate house price prediction. The Transformer architecture is well known for its ability to capture the relationships between words or tokens in a sequence of human language and to process the entire sequence in parallel, enabling more efficient and scalable computation. Our study explores fine-tuning the attention mechanisms and output hidden states of Transformer-based models, comparing their performance against baseline models. Our experiments and analysis demonstrate that the transformer-based attention models outperform the baseline models for real estate price prediction. We also found that utilizing self-attention from unsupervised text learning can enhance the accuracy of real estate price prediction.
KW - generative AI
KW - house price prediction
KW - real estate
KW - transformer attention
UR - http://www.scopus.com/inward/record.url?scp=85190617158&partnerID=8YFLogxK
UR - https://ieeexplore.ieee.org/xpl/conhome/10487636/proceeding
UR - https://ieee-csde.org/csde2023/
U2 - 10.1109/CSDE59766.2023.10487658
DO - 10.1109/CSDE59766.2023.10487658
M3 - Conference contribution
AN - SCOPUS:85190617158
T3 - Proceedings of the 2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2023
SP - 1
EP - 7
BT - Proceedings of the 2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2023
PB - IEEE, Institute of Electrical and Electronics Engineers
T2 - 2023 IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2023
Y2 - 4 December 2023 through 6 December 2023
ER -