This REAA Zoom class will be on Thursday, March 18 from 6:30-8:30 PM and is approved for 2 hours of continuing education in California (only). Brian Melsheimer and I will provide a walk-through of using Microsoft Excel for residential appraisal analysis. Given our limited time in the class, this will be a relatively quick run-through of some basic skills, including creating a scatter graph, linear trendline analysis to determine time and GLA adjustments, contrasting/grouped pairs analysis, and basic Excel setup tips. We’ll also discuss online tools for appraisers.
The FMHPI (Freddie Mac House Price Index) is your friend. The index reports monthly change in house prices for the nation, states, and metropolitan statistical areas (MSAs). The data series starts in January 1975 and usually runs two to three months behind the current date. The data is available as Excel or text files, either seasonally adjusted or non-seasonally adjusted.
As an appraiser, I use the FMHPI in several ways for current work:
Regional market trends. I use this data set if I want to show how house prices are trending in my home regions of Sacramento or Fairfield/Vallejo.
Market areas with little activity. I cover Stonyford, a small community on the eastern edge of the coastal mountains with very few sales. I trend surrounding MSAs to show regional activity and offer this as the best evidence available for market trends in Stonyford.
Market areas with significant non-conformity. I cover the Sacramento Delta area, a place with a mix of small towns and small acreage residential properties plus some high quality mansions. Market trends are often influenced by the mix of sales and frequently are not reliable so I use the Sacramento MSA instead.
Where the FMHPI really shines is with retrospective work. My local MLS has data going back to 1998 for my county. That’s great for the past 23 years but what if a client wants a date of value before then? In the past two years I have had requests for appraisals dated in the 1980s and early 1990s on multiple occasions. Each time I used the FMHPI to determine market trends and calculate time adjustments.
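Under the hood, an index-based time adjustment is just scaling a known price by the ratio of index values between two dates. Here is a minimal Python sketch of the idea; the function name and figures are hypothetical illustrations, not actual FMHPI values:

```python
def index_time_adjust(price, index_at_sale, index_at_value_date):
    """Scale a known price by the ratio of house-price index values
    between the sale month and the effective date of value."""
    return price * index_at_value_date / index_at_sale

# Hypothetical figures: a comp sold for $300,000 when the MSA index
# stood at 180.0; the index was 90.0 at the retrospective date of value.
print(round(index_time_adjust(300_000, 180.0, 90.0)))  # 150000
```

For a retrospective assignment, the same ratio lets you walk a more recent sale back to the historical date of value when no contemporaneous MLS data survives.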
Normally I download the data into Excel and generate the chart I want, discussed below. However, you can use the pre-built chart and table tool. This is the default chart:
FMHPI default chart
You can add multiple states or MSAs for comparison by clicking the blue button:
FMHPI with Sacramento MSA
You can also adjust the time period on the chart by sliding the time frame below:
FMHPI Date Range Adjusted
Once you have the chart you want, you can print or download it as a PNG file to stick in your report or work file:
Print or download a PNG graphics file
This tool is versatile and includes a table view showing percentage change.
However, I normally download the data into my Excel work file and create my own graphs. The FMHPI MSAs Excel workbooks contain two sheets, with MSAs A-L on Sheet 1 and M-Z on Sheet 2. Column A is the Month, with the MSAs in alphabetical order across the columns, like this:
Month on the far left column, MSAs in alphabetical order
Here’s my simple process to deal with the way dates are formatted.
Download the Seasonally Adjusted MSAs Index:
Click the link
Open the file and copy the Month Column into your Excel work file plus the columns of data for any MSAs you want to analyze.
Leave a column to the immediate left of the MSAs you want to work with to fill in Excel-readable dates. I usually label it “Date Sold.”
Then fill in the Date Sold column matching the dates in the FMHPI Month Column. 1975M01 is 1/1/75, 1975M02 is 2/1/75, etc. Fortunately, once you have the pattern, you can use Excel to autofill to the bottom:
Copy the columns, leaving space for Date Sold, autofill date sold
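If you would rather script the conversion than autofill it, the FMHPI month labels are easy to parse. A small Python sketch (the function name is my own, not part of any FMHPI tooling):

```python
from datetime import date

def fmhpi_month_to_date(label):
    """Convert an FMHPI month label like '1975M01' into a real date,
    mapped to the first day of that month."""
    year, month = label.split("M")
    return date(int(year), int(month), 1)

print(fmhpi_month_to_date("1975M01"))  # 1975-01-01
print(fmhpi_month_to_date("1975M02"))  # 1975-02-01
```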
You can now use this data in Excel to make line graphs or scatter graphs. If you don’t need all 46 years of data in your chart, graph only what you need. Here’s an example of how I would prepare the Sacramento MSA data for the past two years and past 10 years plus the graphs:
Data copied from original columns for ease of creating graphs
And here are the resulting graphs:
Line Graph Example
I hope you can see the value of this data. Let me know in the comments if you have questions or are already using this data.
Update: Well, this helps but doesn’t completely solve the problem. Tab characters still remain in the fields that aren’t converted to numbers or dates. If this doesn’t work for you, I’ve added a couple of steps below that will, using a robust text editor, Notepad++.
A recent Rapattoni MLS update broke data export. It appears that a hidden tab character is placed between fields as a delimiter, changing numeric fields into text fields. This causes issues when you try to use a workbook like Don Machholz’s 1004MC, because the 1004MC workbook relies on the data types being correct. The calculator chokes whenever it finds text in a numeric field like list price (or sale price, or DOM, etc.). This only affects Rapattoni-system MLSs. Metrolist was not affected (yet) because they run their own code and do not appear to have run the update that broke things.
Here’s a quick way to know if your MLS has been affected. Take a look at the Selling Price column. If it lines up on the left, your MLS has the issue. If it lines up on the right, you’re good to go.
Here’s an example with the problem:
Here’s how it should look:
Here’s an easy workaround for the latest version of Excel.
Download your data out of MLS and save it to a csv file.
Open up an Excel workbook.
Click on the cell you want the downloaded data to appear in.
Click on the Data Tab and select From Text/CSV
Select your csv file containing the data you need and hit the Import button
Excel will open a data import wizard. This should show your data with the correct data formats and aligned correctly (numbers aligned to the right, text to the left, dates looking like dates). Click the Transform Data button and you should be good to go.
Older versions may require more work to find the data import wizard. This link shows the process using a slightly older version of Excel. Brings me back to my days as an EDI analyst….
I’m learning from others that this fix doesn’t work universally. If you see this in your Excel file, you still have problems.
Here are additional steps you can take to fix it. The simple way is to place your cursor in the edit bar immediately to the left of the data you want to keep, press <Backspace>, then <Enter>. This deletes the hidden tab character but needs to be repeated throughout the file. Another option is to simply not copy text fields into the 1004MC, since they are not required to run the calculator.
If you’re interested in a better solution, download a robust text editor like Notepad++ (link). Use Notepad++ to open the csv file.
Go to View, Show Symbol, Show All Characters
Notepad++ will then show the Tab characters
Select one of the now-visible tab characters and press Ctrl + C to copy it. In Notepad++, select Search, then Replace.
Click in the Find what: box and paste the tab you copied previously. Leave the Replace with: box blank. Click the Replace All button and then save your changes.
Now you should be good to go.
(There are many ways to skin a cat. Another way would be to import your files into an Access database, delete the tabs, then output as an Excel file. If you know how to do this, you don’t need my instructions)
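For the script-inclined, the same find-and-replace takes only a few lines of Python. This is a sketch operating on a made-up sample line rather than a real export:

```python
def strip_tabs(text):
    """Remove the stray tab delimiters that turn numeric fields
    into text fields in a broken MLS export."""
    return text.replace("\t", "")

# A made-up sample line the way a broken export renders it: a hidden
# tab precedes each value, so Excel treats the numbers as text.
broken = "\t450000,\t1794,\t25"
print(strip_tabs(broken))  # 450000,1794,25
```

To clean a whole export, read the csv into a string, run it through the function, and write it back out before importing into Excel.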
Good luck and contact me directly at firstname.lastname@example.org with questions. Hopefully Rapattoni fixes this soon.
It was my pleasure to speak at the Sacramento chapter of the Real Estate Appraisers Association last night at the Story of Value class with my good friend Ryan Lundquist. I discussed ways to explain markets in residential real estate appraisals, focusing on graphs, and was surprised to see that maybe half of the crowd didn’t include graphs in their reports. This post is the first in a series offering advice and instructions on how to create meaningful graphs for residential appraisers.
Only four appraisers out of the 50 in the room reported using histograms. The histogram is a great tool for analyzing residential real estate markets, and one every appraiser should use.
What is a histogram?
For our needs, a histogram is a graph that shows the distribution of one continuous variable. The histogram splits the variable into equal-sized bins and counts the number of occurrences. It works well for important residential real estate variables like gross living area, lot size, age, and sale price.
Bin size is key to creating a useful histogram. Bins that are too wide lose meaning because your data gets clumped together; bins that are too narrow spread the histogram too thin.
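Binning itself is simple arithmetic: each value lands in the bin that starts at floor((value - start) / width) * width. A quick Python sketch with invented GLA figures shows how the choice of width changes the picture:

```python
def histogram(values, bin_width, start=0):
    """Count values into equal-width bins; returns {bin_start: count}."""
    counts = {}
    for v in values:
        bin_start = start + ((v - start) // bin_width) * bin_width
        counts[bin_start] = counts.get(bin_start, 0) + 1
    return dict(sorted(counts.items()))

# Invented GLA figures (sf) for recent sales in a market area.
gla = [980, 1450, 1500, 1550, 1600, 1750, 1800, 1950, 2000, 2050, 2600, 3600]

# Wide bins clump most of the market into one or two bars; narrow
# bins scatter it into bars of one sale each. Try a few widths.
print(histogram(gla, 500))
print(histogram(gla, 100))
```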
The graph above shows recent home sales in a market area, distributed by size. There is one sale under 1000 sf and one over 3500 sf. The most frequent size of home sold recently is around 2000 sf, with the bulk of the homes in the 1400 sf to 2000 sf range.
Every report prepared for a lender asks the question “Is the subject conforming?” At a glance, any home in this neighborhood between 1300 sf and 2600 sf is reasonably conforming in size. There are no sales in the 4000 sf to 5000 sf range, so any homes of that size in the neighborhood are likely non-conforming. The two extreme sales, at 800 sf and 5000 sf, are unusual and likely non-conforming.
This next histogram examines the frequency of sale price in the market area. The most frequent sale price is in the $240,000 to $280,000 range with $360,000 to $400,000 the second most frequent range. A home in contract at $375,000 is fairly typical. A contract price of $700,000 is very unusual and is indicative of a non-conforming home.
The first two graphs were generated using ggplot2 in RStudio. Here’s an example from Excel 365 showing the year built for homes sold in a market:
Most homes sold in this market area (Placerville, CA small residential acreage) were built in the 1970s and 1980s. A couple of homes were built in the 1800s and there are a couple of newly built homes.
The lender forms used by appraisers ask for similar information in a table format:
Which describes the market better, the two histograms or the table?
Make A Histogram in Excel
Here’s how to use the latest version of Excel to make a histogram. This page has instructions for the latest version and older versions.
Start with your data in an Excel workbook with the top row field names and rows below sales data.
Starting with the field name, select the data to generate the histogram from (Ctrl + Shift + Down Arrow will select all consecutive data below).
Press F11 to insert a chart. Then choose Change Chart Type.
Select Histogram then press Ok
You’ve made your first histogram!
However, it’s really ugly. Standard formatting for histograms is to have no space between the bins (columns). To fix that, double-click one of the bins to open the Format Data Series pane. Select the bars to activate Series Options.
Change the Gap Width to 0%. Notice how the columns come together. If you like having gaps between the columns, set the Gap Width to 6%.
To change the bin width, double-click the x-axis labels (GLA in this case). In the Format Axis pane, under Axis Options, select the Bin Width control and type in what you want. Play with it until you’re happy with the shape of your histogram.
Excel defaulted to a bin width of 370 sf. Below is what the histogram looks like with bin width equal to 100 sf:
Here’s bin width equal to 500:
Here’s bin width equal to 200:
Which one appears most useful to you?
I use histograms to understand some aspect of a market. How big are the homes? When were the homes built? How big are the lots in the neighborhood? What do homes sell for in the market area?
Then consider where the subject fits in the market. Is it bigger than typical? If so, you have support for concluding market value is higher than typical. Is it smaller? Well, now you can show a reason why the price is lower.
Let’s consider the histogram above. The subject is one of the larger homes in the neighborhood but still relatively common in size. I would expect, without knowing anything else, that the subject’s market value is on the higher side for the neighborhood but with a reasonable number of homes larger than the subject. Take a look at the graph below.
What if the subject was one of the largest homes in the neighborhood? The subject’s market value is likely on the upper end of the neighborhood range. Also, there are fewer directly competitive sales, implying market value may be less reliable in this market area than for a smaller home. Now let’s look at an extreme case.
I pity the appraiser asked to appraise a 6500 sf house in this market. However, you do have sales either smaller or larger. Here’s the time to really open your eyes to what is a competitive sale. Throw this graph in your report and your client will immediately see your data difficulty.
After you arrive at market value and as part of your reconciliation, consider using a histogram to support your market value.
“The subject is newer than typical, above average quality custom home on a larger than typical lot. As shown above, the subject’s market value is on the higher side for the greater market area, as expected based on its superior characteristics.”
I hope you agree that histograms can be a powerful tool for appraisers.
Ways to use histograms:
Exploratory analysis to understand characteristics of a market area
Assist in determining reasonable search criteria for sales comparison
Visual representation of the subject’s position in a market area
Support for market value conclusions
I learned about histograms from George Dell. Thanks George. Get smart by taking his Stats, Graphs, and Data Science classes or at the very least, sign up for his blog. More info on George’s website.
Postscript: I am working towards moving away from Excel to using R, the data analysis package. I’ll link to the R code used for the two graphs as a separate post/update soon.
Two recent posts from my friend Jamie Owen at the Cleveland Appraisal Blog, plus a planned realtor office visit, inspired me to write this. Jamie did a great job blowing up the myth that comparable sales need to be within one mile of the subject in this post. He also tackled geographical competency, the need to have boots-on-the-ground knowledge of a market in order to credibly value properties there, in a second post.
Both posts touch on the subject of what is a comparable sale and why should anyone in real estate, or even the general public, care? The quick answer is that “comps” are the basis for how we, both those in the real estate industry and the man on the street, value residential real estate.
Per the Dictionary of Real Estate Appraisal, 3rd Edition, comparables are:
“…similar property sales, rentals, or operating expenses used for comparison in the valuation process; also called comps.”
Comps are used in the Sales Comparison Approach to Value, especially in residential real estate appraisal. All of us, appraisers, real estate agents, and folks considering buying a home, use the theory of substitution to determine the value of a home. What would the typical buyer shopping in that neighborhood buy instead of the subject?
A comparable sale is a sale of a home that the typical buyer of the subject would buy instead of the subject.
Subconsciously, everyone who owns a home compares it to homes in their neighborhood. We learn about a recent sale on our block and place a price on ours based on whether we think it’s better than ours, relatively similar, or inferior. The formal version of this is the sales comparison approach used by appraisers.
We appraisers find the most similar sales and adjust the comparables for differences from the subject, leaving each adjusted comparable sale as an indicator of value for the subject. The vast majority of single-family residential appraisals in the US rely on this methodology.
In the context of the sales comparison approach to value, the key is to identify the comps for the subject.
The easiest way to get the value of a single family residence wrong is to get the comps wrong!
Residential real estate, such as a house, a condominium, or a home on a small acreage lot outside of town, has characteristics (“dimensions”) that describe a specific sale of a specific property. The more similarities between a sale and the subject under consideration, the better the comp. We could go into a deep dive, like George does in his classes; instead, I want to talk about what I do specifically for simple single-family residential work in conforming neighborhoods.
Some examples of dimensions and characteristics important to valuing homes include transaction terms (financing, credits, etc.), motivations, location, views, quality, design, condition/age, floor area, and amenities.
Some dimensions/characteristics are more important than others and can vary dramatically in importance depending upon the location. For example, pools are valuable in the Sacramento region but have less value in the Pacific Northwest where the weather is cooler. Basements are common in the Midwest and East Coast but not so here. In the Whisper Creek Subdivision in Arbuckle, CA, a tract of large homes on half acre lots, RV parking is a significant factor unlike other nearby markets. This is why the geographical competency that Jamie discusses is so important. Appraisers with geographical competency understand what characteristics define a true comparable and get the subject’s value right.
Time usually matters, except when it doesn’t. If a market is rapidly changing, using the most recent sales reduces the impact of market change. When a market is relatively stable, time is less important, so using older comparables is reasonable. I frequently relax my time criteria because time is usually the easiest and most reliable adjustment to make.
For a typical tract home in my area, the most important factors are motivations for the purchase or sale, time, location/proximity, and size/floor area. I start with a map search using my neighborhood boundaries and go back 12 months prior to the date of value for closed sales. I exclude from consideration REO sales, short sales, and other transactions where motivations likely had an impact on sale price.
I search for homes a little smaller than the subject because most buyers can make do with a slightly smaller home. Because the typical buyer can more readily accept a home larger than the subject, I set the upper boundary of my floor area range wider than the lower bound. For example, if the subject has 2300 sf of living space, I will search for comparables with 2000 sf to 2800 sf of living space (300 sf smaller to 500 sf larger).
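That asymmetric bracket is easy to express as a rule. A hypothetical Python sketch of my 300-sf-down / 500-sf-up search range (the function and defaults are my own illustration):

```python
def gla_search_range(subject_gla, smaller_by=300, larger_by=500):
    """Asymmetric floor-area bracket: buyers accept a larger home more
    readily than a smaller one, so the upper bound is set wider."""
    return subject_gla - smaller_by, subject_gla + larger_by

print(gla_search_range(2300))  # (2000, 2800)
```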
After I set my criteria in the MLS search, I run the search and review the results.
I mentally draw a box around the subject’s important characteristics so I can place it in the competitive market. This is known as bracketing. Reasonably, would the typical buyer consider the sales found suitable substitutes for the subject? Are the sales similar in quality and design? Are there differences in lot size or age? Do I have larger and smaller homes? Do I have homes in similar condition, or inferior and superior? I try to account for every significant characteristic of the subject so I can show, by comparison, the value of the subject by using these comparables.
If I’m comfortable with the sales found, I can start my adjustments analysis. If not, I revise my search criteria and run the search again until I am happy that the sales found reasonably describe the subject.
Once I have my initial candidate comparable sales identified, I dig in, look for the most representative comparables, and decide which sales to research further (view the exterior, contact agents involved in the transaction, etc.). I review outliers, sales outside the normal range, and try to determine why those sales deviate from the norm. I either adjust for the issue or remove the outlier from consideration. The remaining comps, after adjustment, are my indicators of value for the subject.
Comps are usually easy to find in conforming neighborhoods as long as the subject is similar to the rest of the neighborhood. When the subject is unusual, or when there are few sales available and they are all different (“non-conforming”), comparable selection is difficult. The appraisal becomes complex and beyond the scope of this article. I do have tips in my article about appraising complex residential properties.
How do you search for comparables? What are some tips for a real estate agent or new appraiser you can share?
Recently Joseph James DeAngelo was arrested outside of Sacramento, accused of being the East Area Rapist. The East Area Rapist terrorized California in the 1970s, committing more than 50 rapes and 12 murders before disappearing more than 30 years ago. My friend Ryan Lundquist started a poll and conversation on his blog: What discount would you expect if the East Area Rapist’s house came on the market?
The results are interesting. Most respondents were in the 0-10% and 10-20% brackets. I was in the 0-10% bracket based on the one time I’ve worked on a similar problem. Several years ago I was completing an appraisal of a house for a purchase in one of my markets when I noticed a weird note in the listing: “Blessed by a deacon.” What the heck did that mean?
I called the listing agent, a friend of mine, and asked her what she meant by that. Turns out there was a murder on the site within the past six months. Would have been nice if she’d let me know when I scheduled the appointment that, oh, by the way, there was a murder at the subject….
I frantically called the lender to warn them that a murder had occurred at the subject in the past six months, that I would need time to analyze this new evidence, and that I needed more money for the report because of the extra due diligence. I called my mentor to get advice on how to deal with this and to see if he had any data (nope). I then searched the MLS over the past 10 years, but for some reason listing agents don’t normally advertise “recent murder here” when trying to sell homes, so I struck out again. No one at the local Realtor meeting could remember any sales of homes after a murder or similar circumstance either. One of my comparables, however, had a death by natural causes within six months of its date of sale.
So after a bunch of due diligence, I had jack squat for data. I took a step back. This was an entry tier home at a time where inventory was low in a relatively safe neighborhood where the murder was unlikely to occur again. Three full-price offers were received for the subject and all three potential buyers were aware of the home’s history. Was there a discount because of the murder? My best evidence, the three full-price offers, showed little to no market reaction from the murder. I discussed my research in my report and concluded no market reaction and sent it in. The purchase closed less than a month later.
This is not exactly the same situation as if the East Area Rapist’s house were on the market. First, no reports to date suggest that crimes were committed at the accused’s house, while the house I appraised was the site of a murder. Second, the murder at my subject property was a one-off with little news coverage outside the community where it occurred. The East Area Rapist is notorious throughout California, if not the US, especially to those who were of age at the time of his crimes. A better, but not perfect, model might be Dorothea Puente, the Sacramento landlord who murdered at least seven people and buried them in the backyard. Ryan plots the sales of her duplex on his poll results post.
Tony Bizjak, the real estate writer for the Sacramento Bee, liked Ryan’s post enough to turn it into an article and quoted me for the story.
p.s. Randall Bell, PhD, MAI is the national expert on diminution in value and determining crime scene discounts. His book Real Estate Damages is highly recommended. He thinks the discount will be closer to 25% if the home of the East Area Rapist hits the market.
This is the second post in a series describing my journey to move my residential appraisal business workflow from Microsoft Excel to R. Last time out, I made the case for why I’m making the change. This post will be a catalog of the ways I use Excel today to serve as a guide for where I need to go.
I use Excel a lot; I start a separate Excel file for each appraisal I work on. We appraisers are required to retain, for each report, a work file that supports our conclusions and allows someone auditing us to understand what we did.
Here’s what I do with Excel today:
Store my Neighborhood Market Data downloaded from MLS. I grab all sales in a competitive market area for at least five years back, sometimes ten years. This goes into a Market worksheet.
Store my Competitive Market Data downloaded from MLS. I grab all potential competitive comparable sales and listings going back at least 12 months but frequently further. This goes into a Comparables worksheet.
Create Neighborhood Price Per Square Foot and/or Sale Price trendlines from the Market data. If I have questions about trends, I’ll also look at changes in floor area over time. I create Charts for each data run. I then spend time formatting and labeling so I can include the charts in my reports.
Create Pivot Table summaries of the Neighborhood Market Data. My normal summary table includes all sales summarized by month in a neighborhood with homes sold, mean Days on Market, Low Price, High Price, Mean Sale Price, and Mean PSF. I use a template and replace my old market data with the new data, then refresh, so right now this is really fast in Excel. However, I can’t do Median summaries in Pivot Tables easily, an issue that I expect to be able to handle in R. For most of my reports, I include this pivot table summary.
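For comparison, the mean-and-median-by-month summary that is awkward in an Excel pivot table is straightforward in R or pandas; even plain Python shows the idea (the sales figures below are invented placeholders for MLS data):

```python
from collections import defaultdict
from statistics import mean, median

# Invented (month, sale_price) pairs standing in for an MLS download.
sales = [
    ("2021-01", 300_000), ("2021-01", 310_000), ("2021-01", 500_000),
    ("2021-02", 305_000), ("2021-02", 315_000),
]

# Group prices by month, then summarize each group.
by_month = defaultdict(list)
for month, price in sales:
    by_month[month].append(price)

# Note how one outlier drags the January mean well above its median.
for month, prices in sorted(by_month.items()):
    print(month, len(prices), mean(prices), median(prices))
```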
I use the pivot table summaries to create a column chart showing 12 month change in mean PSF and/or mean sale price as another tool to understand and report changes in my neighborhoods. This is especially important in seasonal markets like Davis, California, where home selling revolves around the university schedule. I’ll include this chart in every Davis appraisal and in other appraisals where necessary to explain market trends.
I occasionally create histograms to show the shape of a market with regard to one variable (primarily sale price and floor area; great for showing where the subject lies in relation to the rest of the market). I’ve seen an example of how easy it is to create a histogram in R, so I have high expectations that R will be an improvement over Excel for histograms.
Create PSF and Sale Price scatter graphs of competitive sales. I use the trendline coefficients to determine daily price adjustments for market change. I’ll also look at floor area over time to see if my comparables are changing over time or not to help understand what my market is really doing. I include the scatter graphs in my reports so clients and intended users can understand the subject’s sub-market.
I use pivot tables linked to my comparable sales data for contrasting one variable. For example, I’ll use pivot tables to examine the difference between homes sold with pools and without pools, a significant factor in the Sacramento Valley. I’ll create a table that shows how many comparables sold with pools vs. without pools, the difference in mean sale price, the difference in mean PSF, and, to understand my data, mean floor area and mean year built, to see whether I’m dealing with an apples-to-apples or apples-to-oranges comparison. If the homes with pools are relatively similar in size and age to the ones without pools, my adjustment is more likely to be strictly for the pool. If homes with pools tend to be bigger than those without, I have to consider covariance as part of the explanation for the differences noted. (Covariance is a significant issue in residential real estate markets.)
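That grouped comparison can be sketched in a few lines. The comparables below are invented and the function is my own illustration, not a standard tool:

```python
from statistics import mean

# Invented comparables: (has_pool, sale_price, gla_sf, year_built)
comps = [
    (True, 480_000, 2100, 1998),
    (True, 495_000, 2200, 2001),
    (False, 450_000, 2050, 1999),
    (False, 446_000, 2000, 1997),
    (False, 454_000, 2150, 2000),
]

def group_summary(rows, has_pool):
    """Count plus mean price, GLA, and year built for one group."""
    sub = [r for r in rows if r[0] == has_pool]
    return {
        "count": len(sub),
        "mean_price": mean(r[1] for r in sub),
        "mean_gla": mean(r[2] for r in sub),
        "mean_year": mean(r[3] for r in sub),
    }

pool, no_pool = group_summary(comps, True), group_summary(comps, False)
# If mean GLA and year built are similar between the groups, the price
# gap is more plausibly the pool itself rather than covariance.
print(pool["mean_price"] - no_pool["mean_price"])  # 37500
```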
I have a Calculators page that I use for ad hoc modeling and calculations I need to do by hand. The most significant calculators I’ll need to move to R are the one I use to calculate time adjustments for comparables, plus others for modeling lot size adjustments. These should be painless to move over to R.
This is the bulk of what I do with Excel today. As I start to shift this workflow over to R, I plan to go into more detail about the special or not so special challenges I encounter. I also have high expectations that R will inspire me to come up with new ways of analyzing and presenting my data.
Reminder for appraiser readers in particular: R is a tool. Excel is a tool. Most of what I plan to discuss in this series is about changing tools. Occasionally, I’ll talk about modeling decisions (like covariance above). However, all of what I’m doing is rooted in the Stats, Graphs, and Data Sciences classes I’ve taken. You need to understand the theory so you can make informed decisions about your modeling choices.
Take classes from George; he’s very willing to help. https://georgedell.com/
I’ve come to rely heavily on Microsoft Excel over the years to do my work as a residential appraiser. So much so that I teach classes to other appraisers on how to use Excel in their work. However, after taking George Dell’s first R for Appraisers class recently, I’ve decided to completely revise my workflow and replace what I do in Excel as much as possible with R.
What’s R? might be your first question. R is a free software package widely used for data analysis; most university economics programs teach with R these days. Here’s the official description (link). You can download a free copy here.
However, if you’re going to use R, you should use RStudio, the free integrated development environment for R. It lets you see more and do more than the basic R console and really extends what you can do with R. Free copy and more information here.
Why would someone who has invested heavily in developing skills in Excel move to a brand new software package? Here are my reasons:
Data Retention-R is software designed for data analytics from the ground up, and that’s what appraisers do: relatively specialized data analytics. Spreadsheets, by contrast, were designed to replace paper ledgers. You can scribble all over a sheet, and if you’re not careful, you’re very likely to write over your numbers and make a mess. This is a problem if you’re trying to preserve your data for the future, say if you need to maintain work files like appraisers are legally required to do. R solves this data retention issue.
Reusable Processes-R is designed from the ground up to be reusable. Drop your data in and get your results. I’ve done a lot to get Excel to work that way for me but I still do a ton of manual processes each time I work on an appraisal. Once I know what I’m doing in R, I’ll have a lot fewer manual processes to deal with. It will be easy for someone else to audit my analysis for appraising.
Superior Analysis-I use pivot tables a lot in Excel. In fact, I teach a class on using pivot tables for appraisers. The big drawback with Excel pivot tables is that I can’t include median values as part of my summaries without a lot of work (coding or buying someone else’s product). This is not an issue in R.
Personal Growth-R gives me an opportunity to learn new ways to analyze data, the underlying job function that has given me the most satisfaction throughout my career. I’m excited to learn new ways to do my job better and I expect big changes once I’ve completed the move to R. Also, if this appraising gig doesn’t work out, with R I’ll have a job skill in demand in other industries.
I plan to document my migration from Excel to R here. I’ll share resources I find useful and will discuss issues I run into. I also plan to describe the benefits and drawbacks. This is mainly for me, and maybe Abdur Abdul-Malik and Bruce Hahn, the other appraisers I know making the same journey. And maybe I can help George Dell come up with ideas on how to spread the word to the rest of the industry. Thanks again George for the inspiration.
I just completed an appraisal of a home for a refinance where the subject had sold eight months prior to date of value. I included the prior sale of the subject as Comp#4 and had two very recent, reasonably similar comparable sales that I included in the sales comparison grid. The subject is located in a suburban neighborhood built out in the 1960s-1970s on a ridge with some homes featuring city light views. Many recent sales feature remodeling or recent updating while others are relatively original.
In most suburban neighborhoods, sale price per square foot (PSF) is the best way to track market trends because PSF reduces the impact of changes in floor area in your comparable data. Generally speaking, the sale price model is subject to variations in floor area and is not the best model in a case like this. So, following my normal practice, I used the PSF linear regression model to determine time adjustments for the comparables.
The PSF linear regression formula is:
Slope of the Trendline * Subject’s GLA = Daily Time Adjustment
PSF Linear Regression Model
So, in this case:
0.0992 * 1794 sf = $178/day
PSF Model Time Adjustments
Once I applied these time adjustments, my range of value was from $447,000 to $466,000 with an indicated value of $460,000. However, the most recent sale indicated $450,000 at most as did homes in contract competitive with the subject. Something wasn’t right.
So I tried a sale price linear regression model instead.
Sale Price Linear Regression Model:
Slope of the Trendline = Daily Time Adjustment
Sale Price Linear Regression Model
In this case, the indicated daily time adjustment is $117/day:
Sale Price Model Time Adjustments
The indicated range of value for the subject was $443,000 to $451,000, in line with the most recent sale and competitive homes in contract.
In this case, the Sale Price Linear Regression Model explained my data better than the PSF model.
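The arithmetic for both models fits in a few lines. Here is a Python sketch using the slopes from the trendlines above (the function names are my own):

```python
def psf_model_daily_adjustment(slope_psf_per_day, subject_gla):
    """PSF model: the trendline slope is in $/sf/day, so multiplying
    by the subject's GLA yields a whole-dollar daily adjustment."""
    return slope_psf_per_day * subject_gla

def sale_price_model_daily_adjustment(slope_dollars_per_day):
    """Sale price model: the trendline slope is already in $/day."""
    return slope_dollars_per_day

# Slopes from the two trendlines discussed above.
print(round(psf_model_daily_adjustment(0.0992, 1794)))  # 178
print(sale_price_model_daily_adjustment(117))           # 117
```

Either daily figure is then multiplied by the number of days between a comparable’s sale date and the effective date of value to get that comparable’s time adjustment.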
I’ll have posts explaining more about time and real estate in the near future and will be teaching a class this July in Sacramento. Or you can learn straight from the national expert, George Dell. Sign up for his website and classes here. Thanks George for sharing with the rest of us.
Why do you think the linear model worked better in this case?