Predicting Real Estate Prices in Moscow
A Kaggle Competition
University of Washington Professional & Continuing Education
BIG DATA 220B SPRING 2017 FINAL PROJECT
Team D-Hawks
Leo Salemann, Karunakar Kotha, Shiva Vuppala, John Bever, Wenfan Xu
Keywords: Big Data, Kaggle, Machine Learning, Azure ML Studio, Boosted Decision Tree, Neural Network, Regression, Tableau
Problem Description & Datasets
Input Data       Description                                   Features  Observations
Housing Data     Property, neighborhood, sales date & price    292       30,473
Macroeconomics   Daily commodity prices, indicators like GDP   100       2,485
Data Dictionary  Feature definitions
Shapefiles       Spatial data for maps
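Several experiments below join the housing and macroeconomic tables on the sale date. A minimal pandas sketch of that join; the column names (timestamp, full_sq, price_doc, gdp_quart, oil_urals) are illustrative stand-ins, not necessarily the competition's exact field names:

```python
import pandas as pd

# Toy slices of the two input tables, keyed by sale date
housing = pd.DataFrame({
    "timestamp": ["2014-01-05", "2014-01-06"],
    "full_sq": [43, 60],
    "price_doc": [5_850_000, 7_900_000],
})
macro = pd.DataFrame({
    "timestamp": ["2014-01-05", "2014-01-06"],
    "gdp_quart": [234.2, 234.2],
    "oil_urals": [106.6, 107.1],
})

# Left join keeps every housing row even if a macro row is missing
joined = housing.merge(macro, on="timestamp", how="left")
print(joined.shape)  # (2, 5)
```

A left join is the safe default here: dropping housing rows with no macro match would silently shrink the training set ("watch those row counts").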
ML Studio Flow
1. Load data; select columns
2. Edit Metadata (set datatype)
3. Clean Missing Data
4. Clip, Normalize, Split
5. Train & Evaluate (Boosted Decision Tree, Neural Network)
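The five-step flow above can be sketched as a rough scikit-learn analogue. This is an assumption-laden miniature, not the actual experiment: the column names (full_sq, floor, price_doc) and tiny inline dataset are illustrative, and GradientBoostingRegressor stands in for ML Studio's Boosted Decision Tree module:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# 1. Load data; select columns (toy stand-in for the housing table)
df = pd.DataFrame({
    "full_sq":   [43, 34, 77, np.nan, 60],
    "floor":     [4, 1, 9, 2, np.nan],
    "price_doc": [5_850_000, 6_000_000, 13_100_000, 4_700_000, 7_900_000],
})
X, y = df[["full_sq", "floor"]], df["price_doc"]

# 2. "Edit Metadata": make sure the selected columns really are numeric
X = X.apply(pd.to_numeric)

# 3-5. Clean missing data, normalize, and train a boosted-tree model
pipe = Pipeline([
    ("clean", SimpleImputer(strategy="mean")),              # step 3
    ("scale", MinMaxScaler()),                              # step 4 (normalize)
    ("model", GradientBoostingRegressor(random_state=0)),   # step 5
])

# step 4 (split), then train & evaluate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
pipe.fit(X_tr, y_tr)
preds = pipe.predict(X_te)
print(preds)
```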
Azure ML Studio Experiments - Variations
Wenfan - Baseline
● Basic 12 real estate features
● Tried 4 regression models, kept 2
Cols: 13 | Rows: 27,909 | RMSE: 2,505,749.58 | RMSE/STDEV(price): 0.524203184

Leo - Incremental add
● Incrementally add more real estate features
● Omit macroeconomic features
● Detailed Human-in-The-Loop process
Cols: 64 | Rows: 15,693 | RMSE: 2,573,721.30 | RMSE/STDEV(price): 0.538422878

Shiva - Feature Selection Pre-processor
● Separate experiment for feature selection (Permutation Feature Importance)
● Joined macro data
● Added retail-specific features
● Added Decision Forest Regression module
Cols: 21 | Rows: 30,471 | RMSE: 2,425,862.34 | RMSE/STDEV(price): 0.507490762

Karunakar - Filter Based Feature Selection
● Filter Based Feature Selection
● Boosted Decision Tree
● Decision Forest Regression
Cols: 38 | Rows: 14,853 | RMSE: 3,054,675.32 | RMSE/STDEV(price): 0.639038531

John - Parallel Cleansing Paths
● Joined macro data
● Start with all fields, gradually remove
● Parallel cleansing paths (set to zero; set to median; Probabilistic PCA)
Cols: 391 | Rows: 30,471 | RMSE: 2,263,084.20 | RMSE/STDEV(price): 0.473437552
The Winning Experiment
1. Collect Data
2. Clean Missing Data - try three modes:
a. Custom Value Substitution (a fixed value, e.g. 0)
b. Replace with Mean
c. Replace using Probabilistic PCA
3. Clip, Normalize, Split (same for all three paths)
- Handle categorical & continuous variables
- Outlier clipping (per-value; not via SQL)
- Data normalization / feature scaling
4. Train & Evaluate - compare three models:
a. Poisson Regression
b. Neural Network Regression
c. Boosted Decision Tree Regression
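The parallel design above (three missing-data modes crossed with three regressors) can be sketched in scikit-learn. Caveats: Probabilistic PCA imputation isn't in scikit-learn, so IterativeImputer stands in for it here as an assumption, and the data is synthetic:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(20, 150, size=(200, 3))           # fake property features
y = 80_000 * X[:, 0] + rng.normal(0, 1e5, 200)    # fake prices (always > 0)
X[rng.random(X.shape) < 0.1] = np.nan             # knock out 10% of values

cleaners = {                                       # step 2: three cleansing modes
    "zero": SimpleImputer(strategy="constant", fill_value=0),  # mode a
    "mean": SimpleImputer(strategy="mean"),                    # mode b
    "ppca-like": IterativeImputer(random_state=0),             # mode c stand-in
}
models = {                                         # step 4: three models
    "poisson": PoissonRegressor(max_iter=1000),
    "neural net": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "boosted tree": GradientBoostingRegressor(random_state=0),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
results = {}
for cname, cleaner in cleaners.items():
    Xtr_c, Xte_c = cleaner.fit_transform(X_tr), cleaner.transform(X_te)
    for mname, model in models.items():
        model.fit(Xtr_c, y_tr)
        rmse = mean_squared_error(y_te, model.predict(Xte_c)) ** 0.5
        results[(cname, mname)] = rmse

best = min(results, key=results.get)               # lowest-RMSE combination wins
print("best combo:", best)
```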
Final Algorithm Parameters
Predictions Based on Normalized Inputs
389 input columns
Visualization: Predicted Price vs. Real Price
● The predicted “waveform” tracks the actual prices quite well (peaks and valleys line up)
● Fairly consistent delta: predictions undershoot by about 500K rubles
Visualization: Error Analysis by House Property (Square Meters)
Visualization: Error Analysis by Geographic Property (Districts)
Conclusion & Further Work
TWL (Today We Learned)
● Azure ML Studio is great for trying multiple techniques in parallel (try that in Python!)
● Many ways to approach the problem.
○ Effort required varies a lot …
○ So does the quality of the results.
Next time …
● Watch those row counts … did you lose any?
● Deploy Web Service earlier and more often.
Someday/Oneday …
● Use different models for different subclasses of real estate.
THANK YOU!
Appendix
Experiment Variation Details & Results
More experiment screenshots
Azure ML Studio Experiments - Variations
Wenfan - Baseline
Characteristics:
● Basic 12 real estate features
Regression Models: Boosted Decision Tree; Neural Network; Bayesian Linear; Linear
Notes: Kept Boosted Decision Tree and Neural Network; dropped the others.

Leo - Incremental add
Characteristics:
● Incrementally add more real estate features
● Omit macroeconomic features
Regression Models: Boosted Decision Tree; Neural Network
Notes: Detailed Human-in-The-Loop (HITL) process.

Shiva - Feature Selection Pre-processor; add Macro & Retail
Characteristics:
● Joined macro data
● Added retail-specific features
Regression Models: Boosted Decision Tree; Decision Forest Regression
Notes: Separate experiment for feature selection (Permutation Feature Importance).

Karunakar - Filter Based Feature Selection
Characteristics:
● Filter Based Feature Selection
● Remove features that aren’t helping
Regression Models: Boosted Decision Tree; Decision Forest Regression
Notes: Kept Filter Based Feature Selection, Boosted Decision Tree, and Decision Forest Regression.

John - Parallel Cleansing Paths (set to 0 vs. median vs. Probabilistic PCA)
Characteristics:
● Joined macro data
● Start with all fields, gradually remove
● Parallel cleansing paths
Regression Models: Multiple Boosted Decision Tree models; Poisson; Neural Network
Notes: Multiple simultaneous parallel paths.
Evaluation Metrics
Name       Strategy                         Cols  Rows    Mean Abs. Error  RMSE          RMSE/STDEV(price)  Rel. Abs. Err.  Rel. Sq. Err.  Coeff. of Determination
Wenfan     Baseline                         13    27,909  1,448,475.24     2,505,749.58  0.524203184        0.535641        0.386980       0.6130200
Leo        Incremental add                  64    15,693  1,577,436.18     2,573,721.30  0.538422878        0.507266        0.284116       0.7158840
Shiva      Feature Selection Pre-processor  21    30,471  1,390,695.31     2,425,862.34  0.507490762        0.521245        0.352367       0.6476330
Karunakar  Filter Based Feature Selection   38    14,853  1,874,864.85     3,054,675.32  0.639038531        0.626830        0.439601       0.5603993
John       Parallel Cleansing Paths         391   30,471  1,358,929.12     2,263,084.20  0.473437552        0.487444        0.315758       0.6842420
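The RMSE / STDEV(price) column normalizes RMSE by the standard deviation of the target, so values below 1.0 beat a predict-the-mean baseline. A minimal sketch with made-up prices and predictions:

```python
import math

prices      = [5_850_000, 6_000_000, 13_100_000, 4_700_000, 7_900_000]
predictions = [6_300_000, 5_600_000, 12_500_000, 5_200_000, 8_400_000]

n = len(prices)
# Root mean squared error of the predictions
rmse = math.sqrt(sum((p - q) ** 2 for p, q in zip(prices, predictions)) / n)
# Population standard deviation of the actual prices
mean = sum(prices) / n
stdev = math.sqrt(sum((p - mean) ** 2 for p in prices) / n)

print(f"RMSE: {rmse:,.0f}  RMSE/STDEV(price): {rmse / stdev:.3f}")
```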
Shiva’s Pre-Processor Experiment
Uses the Permutation Feature Importance algorithm to compute an importance score for each feature in the dataset.
1. Load housing and macro data; join them.
2. Select ALL columns; Edit Metadata (set datatypes).
3. Split Data.
4. Add a Permutation Feature Importance module (left input: Train Model; right input: Dataset). Works only for regression or classification.
5. Execute Permutation Feature Importance (~40 minutes).
6. The result lists the top-scoring features in the dataset.
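The same idea is available in scikit-learn as `permutation_importance`: shuffle one feature at a time and measure how much the trained model's score degrades. A sketch on synthetic data (the feature names are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
full_sq = rng.uniform(20, 150, n)        # strongly predictive feature
noise_col = rng.normal(size=n)           # irrelevant feature
X = np.column_stack([full_sq, noise_col])
y = 80_000 * full_sq + rng.normal(0, 1e5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permute each column 10 times and average the score drop
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["full_sq", "noise_col"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# full_sq should score far higher than noise_col
```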
Karunakar’s Pre-Processor Experiment
A boosted decision tree ensemble tends to improve accuracy, with some small risk of less coverage.
1. Load housing data.
2. Select columns; Edit Metadata (set datatypes).
3. Apply SQL transformations.
4. Filter Based Feature Selection; normalize data; split data.
5. Chose Boosted Decision Tree and Decision Forest Regression to find the best predictor.
6. Apply Train Model and Score Model for each algorithm.
7. Evaluate the model.
Karunakar Variation
1. Filter Based Feature Selection (remove features that aren’t helping)
2. Decision Forest
Filter Based Feature Selection:
1. Feature selection is the process of selecting the attributes (columns) in a dataset that are most relevant to predictive modeling.
2. Choosing the right features can improve the accuracy and efficiency of classification.
3. The Filter Based Feature Selection module identifies the columns in your input dataset that have the greatest predictive power.
Pearson Correlation:
1. Pearson’s correlation statistics or Pearson’s correlation coefficient
is also known in statistical models as the r value. For any two
variables, it returns a value that indicates the strength of the
correlation.
2. Pearson's correlation coefficient is computed by taking the
covariance of two variables and dividing by the product of their
standard deviations. The coefficient is not affected by changes of
scale in the two variables.
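Pearson's r exactly as described above: covariance divided by the product of the standard deviations, unchanged by rescaling either variable. A minimal sketch (the square-meters and price values are made up):

```python
import math

def pearson_r(xs, ys):
    """Covariance of xs, ys divided by the product of their std. deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

sq_meters = [30, 45, 60, 80, 120]
prices = [4.1, 5.8, 7.2, 9.9, 14.5]   # millions of rubles

r = pearson_r(sq_meters, prices)
scaled = pearson_r([s * 10 for s in sq_meters], prices)  # change of scale
print(round(r, 4), round(scaled, 4))  # identical: r is scale-invariant
```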
Karunakar Variation
Decision Forest Regression Model:
Decision trees are nonparametric models that perform a sequence of
simple tests for each instance, traversing a binary tree data structure
until a leaf node (decision) is reached.
Decision trees have these advantages:
1. They are efficient in both computation and memory usage
during training and prediction.
2. They can represent non-linear decision boundaries.
3. They perform integrated feature selection and classification
and are resilient in the presence of noisy features.
This regression model consists of an ensemble of decision trees.
Each tree in a regression decision forest outputs a Gaussian
distribution by way of prediction. An aggregation is performed over the
ensemble of trees to find a Gaussian distribution closest to the
combined distribution for all trees in the model.
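The ensemble idea above can be sketched with scikit-learn's RandomForestRegressor. Note this is only an analogue: scikit-learn averages the raw per-tree predictions rather than fitting and aggregating Gaussians as Azure's Decision Forest Regression does, and the data is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(20, 150, size=(200, 2))      # fake property features
y = 80_000 * X[:, 0] + 5_000 * X[:, 1]       # fake prices

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Each tree in the ensemble makes its own prediction...
x_new = np.array([[70.0, 10.0]])
per_tree = np.array([t.predict(x_new)[0] for t in forest.estimators_])

# ...and the forest's prediction is the aggregation over all trees
print("ensemble prediction:", forest.predict(x_new)[0])
print("per-tree mean:      ", per_tree.mean())
```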
