51st State: Net Energy Metering

51st State

Last Thursday, I had the honor of taking part in a discussion on the future of the electric grid. The 51st State initiative, put on by the Smart Electric Power Association (SEPA), serves as a platform to re-imagine electricity generation, distribution, and use. The idea is to envision a brand-new 51st state that would be unencumbered by existing structures. This year, the focus was on creating a roadmap to that future state, with submissions consisting of a paper, graphic, and poster. I made a submission along with 13 other authors; you can see our work on the SEPA 51st State site. On Thursday, these authors, along with other industry players, gathered in Denver for panels, table break-outs, presentations, and lots of lively back-and-forth discussion. Three take-aways particularly resonated with me for their ingenuity, sensibility, and recurrence:

  1. Net Energy Metering (NEM) is a good subsidy tool in the short run.
  2. A rate rider is a good way to fix remuneration for the lifetime of a Distributed Energy Resource (DER).
  3. Defining value will be very important for any change to the rate structure.

I’ll go over these three ideas in separate posts, starting with Net Energy Metering.

Net Energy Metering (NEM) refers to selling the electricity you produce into the grid; your bill is based on the net of what you buy minus what you sell. The typical scenario is a solar rooftop that sells electricity during the day, then buys what the house needs when the sun isn’t shining. In effect, the household uses the grid as a battery.
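As a toy illustration of the netting, a monthly bill under NEM might be computed like this (the rate and usage numbers are made up, and a flat retail rate is assumed):

```python
# Toy monthly bill under net metering, assuming a flat retail rate.
# All numbers here are made up for illustration.
RATE = 0.12       # $/kWh, assumed retail price
bought = 600      # kWh drawn from the grid over the month
produced = 450    # kWh of rooftop solar fed back into the grid

net_kwh = bought - produced   # the customer is billed only on the net
bill = net_kwh * RATE
print(f"Net usage: {net_kwh} kWh, bill: ${bill:.2f}")  # Net usage: 150 kWh, bill: $18.00
```

Note that the customer pays for 150 kWh even though the grid carried 1,050 kWh for them in total — that gap is the heart of the controversy.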

NEM is a very controversial issue between utilities and solar advocates. It benefits solar producers while hurting the utility’s revenue: the user only pays for the net of what they consume versus produce, even though they use the connection all the time. This decreases the utility’s overall revenue, making it harder to cover operations and maintenance costs, which stay the same regardless of how much energy flows.

My original thought was that NEM was fair and good. You should be able to sell electricity just as easily as you buy it, and at the same price; shouldn’t the same product at the same location always be worth the same amount? NEM gives an added boost to the economics of solar energy, and it seemed that the big utilities were trying to stick it to the little guys.

Very quickly, though, it’s easy to see how unsustainable this is. If everyone put the same amount of energy into the grid as they took out, everyone’s bill would be zero: no one would pay anything and the utility would earn nothing, even though everyone was fully using the grid. Even at fairly small penetrations it is unwise, because non-solar customers pick up costs by exactly the amount that solar producers reduce their bills. That is an unfair subsidy from non-solar customers to solar producers. A new way of accommodating residential solar on the grid is required.

My next thought was to advocate for a fixed cost for using the grid. A monthly charge for being hooked up to the grid would ensure that the costs of maintaining it are covered. This is logical because maintenance costs are fixed and independent of how much energy is used. In my paper I call it a grid access fee: essentially a monthly charge based on the size of the connection. Some utilities are already implementing a fixed cost. I still strongly advocate for this fee because it is sustainable and more closely matches charges with costs. However, it doesn’t actively promote solar.

The trouble with current solar subsidies is that they are implemented inefficiently. Solar Investment Tax Credits are available, but many solar companies have difficulty using them because they aren’t making money yet anyway. Some states have Renewable Portfolio Standards (RPS), which essentially demand that a certain percentage of generation come from a specified source; this doesn’t let the market determine the best way to accomplish the goal of clean energy. Feed-In Tariffs (FIT) guarantee payment for a specific clean power project, and have the same trouble of not letting the market decide what best accomplishes the outlined goals. The ideal way to promote clean power would be a carbon tax, but that is unlikely with today’s political sentiment.

The apt point made during the conference was to think of NEM in terms of the short term vs. the long term. Long-term, NEM won’t work. In the short term, however, it is a very efficient subsidy for new generation. NEM is technology agnostic: solar doesn’t have to be used, just whatever works best. The boost to the economics will support growth, leading to more clean power generators and decreasing costs through technology development and economies of scale.

Intermittency is one of solar’s biggest problems: the sun doesn’t shine all the time, and it can be unpredictable when it does. The grid helps solar overcome this. NEM allows the grid to be used as storage, a huge benefit being provided for free. At low penetrations this comes at a low cost: not much utility revenue is lost, and the storage service can be provided without a huge shock to the system.

NEM needs to be thought about in time periods. In the long-term it is an unsustainable rate structure. In the short-term, it is an efficient subsidy that can promote solar.


Home Battery Optimization: Power Arbitrage

Storage of energy is one of the major hurdles for the electric grid. Electricity is best managed by instantaneous generation and consumption. Economic storage of energy could allow for cleaner generation and decreased peak accommodation. In this post, I investigate the optimization of storage independent from renewable generation.

The first case trades electricity using a battery and a grid connection, buying or selling at hourly prices. The second case incorporates a house’s load. In each case, the goal is to optimize when the battery should charge or discharge.

It is important to note that the price and load data are treated as deterministic, as if future prices and loads were already known from a prediction model. In actuality, the data comes from ComEd’s load and price sites, hourly for the first week of January 2016. Load is measured in kWh and price in dollars per kWh.

[Figure: hourly load and price data for the first week of January 2016]

The battery is a 7 kWh Tesla Powerwall. Based on Tesla’s specifications, the optimization model keeps the stored energy between zero and the 6.4 kWh usable capacity, and limits power flow to a maximum of 3.3 kW.

[Image: 2 Tesla Powerwalls]
The optimization uses the minimize function from the scipy.optimize package. Because minimize solves minimization problems and we actually want to maximize revenue, the objective function returns the negative of the dollar value.

The first scenario optimizes when power is bought from or sold to the grid and stored as needed in the battery. The goal is to maximize revenue by buying electricity when it’s cheap and selling when the price is high: the classic financial play of buy low, sell high. The model says we can make $1.62 over the course of the week. The results come as an hour-by-hour series of how much power to buy or sell. Graphed, we can see the state of the battery compared to the price of electricity.
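As a sketch of this formulation, here is a minimal single-day version of the arbitrage optimization. The hourly prices are invented stand-ins for the ComEd data, and the variable names are illustrative rather than the post’s actual code:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical hourly prices in $/kWh for one day; the post uses a full
# week of ComEd hourly price data instead.
prices = np.array([0.020, 0.018, 0.015, 0.014, 0.015, 0.020,
                   0.030, 0.040, 0.045, 0.040, 0.035, 0.030,
                   0.028, 0.030, 0.035, 0.045, 0.060, 0.070,
                   0.065, 0.050, 0.040, 0.035, 0.030, 0.025])
HOURS = len(prices)
CAPACITY = 6.4   # kWh usable, per Tesla's Powerwall specifications
MAX_POWER = 3.3  # kW maximum charge/discharge rate

def cost(flows):
    # flows[t] > 0 means buying (charging); flows[t] < 0 means selling.
    # Total cost = purchases minus sales, so minimizing it maximizes revenue.
    return float(np.dot(prices, flows))

# The battery's state of charge after hour t is the running sum of flows;
# it must stay between 0 and CAPACITY at every hour.
constraints = []
for t in range(HOURS):
    constraints.append({'type': 'ineq', 'fun': lambda f, t=t: np.cumsum(f)[t]})
    constraints.append({'type': 'ineq', 'fun': lambda f, t=t: CAPACITY - np.cumsum(f)[t]})

bounds = [(-MAX_POWER, MAX_POWER)] * HOURS
res = minimize(cost, np.zeros(HOURS), method='SLSQP',
               bounds=bounds, constraints=constraints)
print(f"Profit for the day: ${-res.fun:.2f}")
```

The solver charges at the overnight price troughs and discharges at the evening peak, which is exactly the buy-low, sell-high behavior described above.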

[Figure: electricity price vs. battery state of charge, no house load]

The second scenario incorporates a house’s residential load in kWh. The previous assumptions are the same, except that the system now also has to meet the house’s electricity needs. This nets out to a cost of $3.57, which makes sense: you’re now consuming the electricity you acquire rather than being able to sell it.

[Figure: electricity price vs. battery state of charge, with house load]

There are certainly considerations that were not taken into account, such as the battery’s efficiency losses, and the model is oversimplified in that you couldn’t perfectly predict a residence’s price and load. All in all, the main goal of using Python’s optimization function was accomplished, and we learned something about the money to be made using Tesla’s Powerwall to arbitrage power prices: at $1.62 per week against a $3,000 cost, you turn a profit in 1,852 weeks, or about 36 years. No get-rich-quick scheme here.

Python code on Github.


LCOCE: Levelized Cost of Constant Energy


The Levelized Cost of Constant Energy (LCOCE) is my proposed metric for measuring the price of electricity. It is defined as the total lifetime cost of providing a constant supply of energy 24/7, capturing the real-world cost of serving demand around the clock. It overcomes the shortfalls of many other project evaluation metrics.

The most common way of economically measuring energy projects is the Levelized Cost of Electricity (LCOE). This is a well accepted way to measure the cost of an energy project. It takes the total lifetime costs and divides them by the total lifetime energy generated. If used appropriately taking into account its limitations, it’s a decent way to compare alternative energy projects. However, it is in no way applicable to energy storage systems, and doesn’t take into account the imposed system costs of a new project. LCOE is recognized to have serious deficiencies.

The financial advisory firm Lazard presents the Levelized Cost of Storage (LCOS) to measure the costs of energy storage systems. Once again, it takes the total costs over the lifetime of a system and divides them by the total lifetime energy that can be stored, based on cycling and capacity. The study is careful to only produce metrics for specific, realistic battery use-case combinations. While this is a very thorough way to compare storage against other storage, it provides no insight into evaluating these systems against generation. Conventional generation is always an alternative to storage, so a more universal approach is needed to compare real alternatives.

Including the costs of integrating a technology into the existing system is very important to understanding the true costs of a proposed project. This has been done before for electricity generation: the EIA’s Levelized Avoided Cost of Energy (LACE) measures the cost of what a proposed project would offset in the system where it is implemented. It is always coupled with LCOE to provide a more complete picture of a project’s system cost. Even this LCOE-LACE combination has very little ability to measure the impact of a project that includes storage technology.

The Levelized Cost of Constant Energy (LCOCE) provides a complete system picture of cost that can include storage technologies. It measures the total lifetime cost of a system providing a constant energy supply 24/7, matching the grid’s real demand to supply electricity around the clock. It prices in the cost of making up for variable production, it is universal in including system costs, and it is indiscriminate in being able to incorporate any technology.

An example case is a solar panel and battery combination providing for the constant energy needs of a house. The solar panel powers the house and charges the battery while the sun shines; the battery stores enough energy to power the house when it doesn’t. This is the idealized distributed energy system. The set-up can be derived and calculated as:

(1)  LCOCE = \frac{\textrm{Lifetime Cost}}{\textrm{Lifetime Energy}}

(2)  LCOCE = \frac{\textrm{Solar Panel Cost} + \textrm{Battery Cost}}{\textrm{Lifetime Energy}}

(3)  LCOCE = \frac{\begin{pmatrix} \textrm{Solar Price of Power} * \textrm{Total Solar Power} + \\ \textrm{Battery Price of Energy} * \textrm{Battery Storage} \end{pmatrix} }{\textrm{Lifetime Energy}}

(4)  LCOCE = \frac{\begin{bmatrix} \textrm{Solar Price of Power} * ( \textrm{Solar Power Immediate Use} + \textrm{Solar Power to Battery}) + \\ \textrm{Battery Price of Energy} * \textrm{Battery Storage} \end{bmatrix} }{\textrm{Lifetime Energy}}

(5)  LCOCE = \frac{\begin{Bmatrix} \textrm{Solar Price of Power} * [ \textrm{Solar Power Immediate Use} + \frac{\textrm{Battery Storage}}{\textrm{Hours of Sun}}] + \\ \textrm{Battery Price of Energy} * \textrm{Battery Storage} \end{Bmatrix} }{\textrm{Lifetime Energy}}

(6)  LCOCE = \frac{\begin{Bmatrix} \textrm{Solar Price of Power} * [ \textrm{Solar Power Immediate Use} + \frac{\textrm{Energy per Day}*(1-\textrm{\% Sun})}{24 * \textrm{\% Sun}}] + \\ \textrm{Battery Price of Energy} * \textrm{Energy per Day} * (1-\textrm{\% Sun}) \end{Bmatrix} }{\textrm{Lifetime Energy}}

(7)  LCOCE = \frac{\begin{Bmatrix} \textrm{Solar Price of Power} * [ \textrm{Constant Power} + \frac{(\textrm{Constant Power} * 24) *(1-\textrm{\% Sun})/\textrm{Efficiency Factor}}{24 * \textrm{\% Sun}}] + \\ \textrm{Battery Price of Energy} * (\textrm{Constant Power} * 24) * (1-\textrm{\% Sun})/\textrm{Efficiency Factor} \end{Bmatrix} }{\textrm{Constant Power}*\textrm{Hours}*\textrm{Days}*\textrm{Years}}

(8)  LCOCE = \frac{\begin{Bmatrix} \textrm{\$5,000 /kW} * [ 1 \textrm{ kW} + \frac{(1\textrm{ kW} * 24\textrm{ h}) *(1-20\%)/.7}{24 * 20\%}] + \\ \$1,000\textrm{ /kWh} * (1\textrm{ kW} * 24) * (1-20\%)/.7 \end{Bmatrix} }{.001\textrm{ MW }*24\textrm{ h }*350\textrm{ days }*20\textrm{ years }} = \$363.10\textrm{ per MWh}

Most of the assumptions are taken from Lazard’s LCOE and LCOS reports. One of the biggest is that power demand is a constant 1 kW; real power needs fluctuate throughout the day, so this is a simplification. The formula can be modified for different use cases: for example, you could use a system’s average cost of generation, instead of battery cost, as the compensation cost for when the sun doesn’t shine. There are certainly additional factors that could be taken into account and different ways of calculating, but this gives an idea of what the metric is trying to accomplish.
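The worked example can also be checked in a few lines of code; every input below is one of the assumptions stated in the text:

```python
# All numbers are the assumptions stated in the text (Lazard-based
# prices, 1 kW constant demand, 20% sun, 0.7 efficiency factor).
solar_price = 5000        # $/kW of solar capacity
battery_price = 1000      # $/kWh of battery storage
constant_power = 1        # kW, assumed flat demand
pct_sun = 0.20            # fraction of the day the sun shines
efficiency = 0.7          # battery efficiency factor

energy_per_day = constant_power * 24                               # kWh
battery_storage = energy_per_day * (1 - pct_sun) / efficiency      # kWh for the dark hours
solar_power = constant_power + battery_storage / (24 * pct_sun)    # kW of panels needed

lifetime_cost = solar_price * solar_power + battery_price * battery_storage
lifetime_energy = (constant_power / 1000) * 24 * 350 * 20          # MWh over 20 years

lcoce = lifetime_cost / lifetime_energy
print(f"LCOCE = ${lcoce:.2f} per MWh")  # LCOCE = $363.10 per MWh
```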

The LCOCE is a way to measure the costs of providing electricity within the context of real world needs. It is measured as the cost of providing 24/7 electricity. The metric is able to take into account system costs and is able to handle energy storage technologies.

 

You can see a spreadsheet of my calculations on my Github account.

Fight in Florida

The fight in Florida is the beginning of what will be a growing rift between utilities and advocates of solar energy. To frame the situation: in most of Florida it is currently illegal to sell solar power to a third party; if you did, you would be considered a utility and subject to a utility’s rules. This becomes an issue given the very popular business model in which a solar company installs and owns panels on a home and sells the electricity to the residence. The homeowner doesn’t have to stake the upfront capital costs but can still reap the rewards of having solar. Utilities are against this because it propagates solar use, which cuts into their revenues. While there are ways around the issue, such as PWRStation’s program of leasing or renting portable units, it’s a huge blow to the proliferation of solar in Florida.

The Floridians for Solar Choice effort is trying to change this. It is primarily backed by the Southern Alliance for Clean Energy, along with other environmental, clean energy, and social organizations. They have sponsored the Florida Right to Produce and Sell Solar Initiative Amendment, which limits the ability of the government or utilities to impose barriers on the sale of solar electricity to customers at the same or a contiguous site, as long as the installation is under 2 MW. This would not only allow solar companies to own installations on people’s homes, but also open up other solar financial models such as community cooperatives.

The opposition is Consumers for Smart Solar, backed by the utility companies, which has countered with a proposal of its own: the Florida Right to Solar Energy Choice Amendment. It ensures the right to produce solar power for your own use, which you can already do, so in essence it changes nothing; you still would not be able to sell solar power in Florida. The more important purpose of the campaign is to confuse voters and torpedo the Floridians for Solar Choice effort, protecting utilities from the spread of solar systems.

The latest is that Floridians for Solar Choice will most likely fall short of the 639,149 signatures required by Feb. 1, 2016 to get on the November 2016 ballot; as of Dec. 25, 2015, it had 271,000. The signatures are good for 24 months, so the effort can continue toward the 2018 ballot. More troubling is the big-money lobbying and disinformation campaign put on by the utility industry: Consumers for Smart Solar (the utilities) raised $5.9 million versus $1.9 million for Floridians for Solar Choice. The utilities were clearly successful in creating enough confusion to derail the original effort.

The fight in Florida perfectly exemplifies the issues that solar, and distributed energy in general, will face. A new energy structure needs to be molded around the unique characteristics of solar and renewables: solar’s financial business models need to be allowable, and utilities need to be properly compensated. There are lots of factors to consider, which will play out in very opaque ways, as exemplified by the fight in Florida.


Home Energy: Cyclical Components

We’re going to try a different way of predicting the energy production of a home solar panel. The technique uses the seasonal_decompose function in the statsmodels Python package.

However, to start, it will help to see how the different houses compare across the months. I graphed each house’s energy production on the same x-axis of dates; the median is shown as the thick red line.

[Figure: monthly energy production for each house, median shown in red]

As you can see, every house seems to have a similar month-to-month change at its own level. However, it’s tough to pull a trend out of this that would be useful for predicting future months.

Let’s attempt to break it down, though. We can use statsmodels’ seasonal_decompose function to extract trend, seasonal, and residual components. We pass in the median time series because the function takes only one series; the median is less influenced by outliers than the mean because it takes the middle value rather than a weighted average (though with this dataset’s lack of large outliers, that hardly matters here). Put simply, seasonal_decompose uses a convolution filter to take out the trend. A convolution filter is a type of weighted average that looks at both previous and subsequent values. After removing the trend, it finds the seasonal pattern, essentially the averages for the particular periods: the average of all the Julys, the average of all the Augusts, and so on. What’s then left is the residual, the last component making up the data.

The reason trend information is missing for six months at each end is that the convolution filter averages the six months before and after each position, and runs out of data at the edges. The useful information in these plots is the cyclical pattern of the seasonal component: EnergyProduction seems to depend on the seasons. This is similar to what we saw when we broke the data down by months. Let’s see if we can make use of it.

I combined the trend and seasonality components into one model: a sine function for cyclicality plus a linear component for the overall trend. I used scipy’s least-squares function to optimize the parameters of the model function. It requires initial guesses, for which I used: constant = mean, linear component’s slope = slope of a linearly fitted line, amplitude = 3 * standard deviation / sqrt(2), phase = pi/6. These guesses resulted in the best parameter estimates from the least-squares optimization.
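A sketch of that fit, using scipy’s leastsq on a synthetic series with the same structure (the data and the exact parameter values are invented for illustration, not the post’s actual numbers):

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic monthly series standing in for the median EnergyProduction data:
# a yearly sine cycle on a slight linear trend, plus noise.
rng = np.random.default_rng(0)
t = np.arange(60)  # months
data = 500 + 1.5 * t + 80 * np.sin(2 * np.pi * t / 12 + 0.5) + rng.normal(0, 10, 60)

def model(params, t):
    constant, slope, amplitude, phase = params
    return constant + slope * t + amplitude * np.sin(2 * np.pi * t / 12 + phase)

def residuals(params, t, y):
    return y - model(params, t)

# Initial guesses as described in the post: mean, fitted slope,
# 3 * standard deviation / sqrt(2), and pi/6 for the phase.
slope_guess = np.polyfit(t, data, 1)[0]
guess = [data.mean(), slope_guess, 3 * data.std() / np.sqrt(2), np.pi / 6]
params, _ = leastsq(residuals, guess, args=(t, data))
constant, slope, amplitude, phase = params
```

With reasonable starting guesses, leastsq recovers the underlying constant, slope, amplitude, and phase; poor guesses can land it in a local minimum, which is why the initial values matter.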

Visually, the red line of the model shows a pretty good fit to the data: a sinusoidal component with a slight upward trend. Let’s see how it does at predicting new test data.

It looks very similar to the training data, which is not surprising since even the test data has 500 data points. The MAPE score comes out to 19.1, quite a bit worse than our previous best of 12.48, most likely because we are predicting using only the time component, without the Temperature and Daylight factors the previous model included. Still, it makes for a good exercise in manipulating time series, breaking down the components of the data, and fitting cyclical data.

References:
- GitHub code: https://github.com/262globe/Blog.git
- http://stackoverflow.com/questions/26470570/seasonal-decomposition-of-time-series-by-loess-with-python
- https://searchcode.com/codesearch/view/86129185/
- http://statsmodels.sourceforge.net/devel/generated/statsmodels.tsa.filters.filtertools.convolution_filter.html
- http://www.cs.cornell.edu/courses/cs1114/2013sp/sections/s06_convolution.pdf
- http://stackoverflow.com/questions/16716302/how-do-i-fit-a-sine-curve-to-my-data-with-pylab-and-numpy

Home Energy: Linear Regression

I’m going to start by trying some linear regression, in the hope that we can input new data and output the energy production. Should be straightforward. The first model regresses EnergyProduction on Temperature, the second on Daylight, and the third on both Temperature and Daylight. We’ll start with these and see what we get.

Regressing EnergyProduction on Temperature gives a model that is not very good. The good news is that the p-value is very low, at 2.32e-195, which tells us to reject the null hypothesis that the coefficient is 0. However, R-squared is very low at .07, meaning the model explains very little of the variance in EnergyProduction, so it doesn’t seem very useful.
Daylight is slightly better: also a very low p-value (0), but a higher R-squared (.28), meaning more of the variance can be explained by the model.

So, what happens when we combine both Temperature and Daylight? We again get a p-value of 0.0, but a marginally improved R-squared of .37, compared to .28 for Daylight alone.

              coef     std err   t        P>|t|   [95% Conf. Int.]
Intercept     40.3892  7.144     5.654    0.000   26.386   54.392
Temperature   5.0511   0.124     40.856   0.000   4.809    5.293
Daylight      2.6425   0.036     74.090   0.000   2.573    2.712

The next thing I tried was breaking the data down by month before regressing, the hope being that we could better predict EnergyProduction when we take the month into account. Each month gets its own linear model, with only Daylight as the predictor. The plot shows different colored dots and regression lines for each month. The p-values are again very low, but the R-squared values vary considerably month to month, from a low of .004 in September to .46 in August. This suggests our model’s predictions could have a large spread in some months and a much tighter error in others.

How well the model succeeds depends on how it does on new data, which we will measure with MAPE (Mean Absolute Percentage Error). I already split the data into training and test sets; the models were created on the training data and will be measured on the test data. Let’s compare the Temperature-and-Daylight regression with the month-by-month model.

Model                                  MAPE Score
Temperature and Daylight Regression    15.06
Daylight Regression by Month           12.88

And just for fun, I expanded the month by month regression to include both Temperature and Daylight. This gave a very small improvement.

Model                                  MAPE Score
Temperature and Daylight by Month      12.48

Creating a separate model for each month, including both the Temperature and Daylight factors, gives the best predictions. Although it’s only a marginal improvement on the month-by-month Daylight model, I’ll stick with it because it is the best and the cost to implement is low. Let’s see if we can improve on this score in future analyses!

Home Energy: Testing and Measuring of Prediction Models

Before we continue, it is important to clarify a couple of prediction-model concepts so that we can test our models. This is a side post on how we will evaluate model performance.

We will break the data into two sets. The first, the first 80% of the entries, trains the model and gives it its parameters; think of this as teaching the model. The second, the last 20% of the data, tests how well the model performs on new data.

The performance of the models will be compared using Mean Absolute Percentage Error (MAPE). MAPE is calculated by taking the mean of the absolute errors, each standardized by the actual value, and expressing it as a percentage:

MAPE = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|

where A_t is the actual value, F_t the forecast, and n the number of observations.

This gives us a standardized measurement with which to compare two models, calculated on the new test data. MAPE is an intuitive error statistic: as Minitab’s online resource explains, a MAPE score of 5 means the forecast is off by 5% on average. Other options for measuring model error would have been R-squared, Mean Absolute Deviation, or Mean Squared Deviation. I won’t go into their details except to note that R-squared is the percentage of the response variable’s variation explained by the model, on a scale of 0%-100%; in other words, a deviation measure between the model and actual values over a deviation measure between the mean and actual values.
In summary: to improve something, we have to measure it. So we want a system in place to systematically measure and compare our models.
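As a sketch, MAPE is only a few lines of code (the example numbers are illustrative):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, expressed in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# A forecast that is off by 25% on every point scores exactly 25.
print(mape([100, 200, 400], [125, 250, 500]))  # → 25.0
```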