Draft Performance Analysis - Games Played Basis

Disclaimer: I'm an analysis kind of guy. 
 
Have been thinking about how to evaluate a club's drafting performance. I think the general feeling on Blitz is that our drafting used to be terrible, but is better now. I thought I'd crunch some numbers and see if I could come up with some sort of metric. 
 
The first problem is that it is actually very difficult to evaluate the relative value of players. 
 
I've used games played as a crude measure. It's far from perfect, but it's a useful starting point and it's something that is analysable. (The most obvious issue with this is that a player drafted to a weaker team will likely play more games earlier than one drafted to a stronger team. Also, how do you weigh up two players when one is better but more injury-prone than the other?)
 
I've grabbed the national draft results from each year since 2006, re-ranked them based on games played, and then given a relative value to each draft pick based on the games that player has played. 
 
E.g.:
Player X is the 1st draft pick but has played the 17th most games in the draft: value = -16
Player Y is the 35th draft pick but has played the 6th most games in the draft: value = +29
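For anyone who wants to replicate it, the metric boils down to a one-liner (a minimal sketch; the two players here are the hypothetical ones above):

```python
# Value of a pick = draft position minus games-played rank within that draft
# class. Positive = the player outperformed his draft slot.
def pick_value(draft_position: int, games_rank: int) -> int:
    return draft_position - games_rank

print(pick_value(1, 17))   # player X: -16
print(pick_value(35, 6))   # player Y: +29
```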
 
Over the time period, the best to worst performed clubs in the draft in terms of value for money on the basis of games played are:
 
Adelaide
Collingwood
Carlton
Western Bulldogs
Fremantle
North Melbourne
St Kilda
Sydney
Hawthorn
Brisbane
Essendon
Port Adelaide
Richmond
Geelong
West Coast
Melbourne
(Note: I've excluded GWS and Gold Coast given their short draft histories.)
 
To reiterate, this is only a measure of how well a club picked players relative to draft position, on a games-played basis. It is not a measure that club X has drafted better players overall than club Y. 
 
This doesn't mean that Adelaide have picked the best players through the draft, just that on average the Adelaide picks have resulted in better value for money games-wise than other clubs'. If they have the 10th pick, they are most likely to get the 10th-or-better value player in terms of games played. Over this time period Adelaide have been exceptional, never ranking worse than 8th. 
 
Like all forms of analysis, viewed in isolation it doesn't mean a lot. 
 
Melbourne is ranked the poorest, which aligns with the gut feel that they have made a lot of poor early picks. They have had 14 top-20 picks in that timeframe and only one of them has played more games than his ranking position (Frawley, pick 12, 8th most games played of that year). That they have been a poor-performing club over this period makes it even worse, given a draftee taken by a poor club can expect to play more initially than one taken by a strong club. 
 
Hawthorn and Sydney are only mid-table in terms of draft value. My thinking is that their draftees have come into stronger teams and thus not played as much initially. Note that both have had great success with trading: Hawthorn's team for their first final had Gibson, Lake, Spangher, Gunston, Hale, Burgoyne and McEvoy – 7 players who came via trades. 
 
Geelong are low on this measure, I think because they have been extremely strong over the period, meaning players have tended to spend a long time in the VFL before playing seniors. 
 
Our relative position over the years is:
 
2006  14
2007  9
2008  10
2009  14
2010  11
2011  9
2012  3
2013  7
 
Clearly we could be getting better value from our picks, but the last few years have been an improvement. This aligns with my impressions of the general feeling on Blitz. 
 
2006 was bad for us, with Gumby and Hislop both early picks that didn't play many games. By contrast, 2012 was good for us, with Baguley representing great value – he was the 106th player picked and has played the 4th most games of that year's group. 

Nice effort!

Like your work Goaloss.

As you say, there are problems with using games as a de facto measure of quality. I also think the ranking method isn't ideal, as who really expects picks 40 to 80 to be in order? Teams aren't picking the same guys at this point, so 80 could just as easily be 40.

How are you counting rookie upgrades? At their draft position, or their upgrade position?

I also think that you can't use recent drafts in any analysis of drafting. For me, I'd finish the analysis at 2006 – it's only now that we can even judge that draft. (Obviously my thinking doesn't help with working out if we've actually improved, or whether it's just a vibe. And obviously we have no idea as to whether it's our drafting or our development that may have improved anyway!)

But anyway those are minor niggles, that’s an awesome effort, well done. I wonder if Hawthorn and Sydney are mid table because they haven’t had high draft picks. I suspect high draft picks will artificially lead to those teams who have them being ranked lower in this method, as there will always be lower picks that get opportunities and take them. If you had only the last 5 picks in every draft they’d all get opportunities and you’d smash this measure!

What do you do for guys like Scully or other picks who didn't work out at their first club?

Nice work, Goalloss!

 

@Frosty:

"If you had only the last 5 picks in every draft they'd all get opportunities and you'd smash this measure! "

 

EFC should do better out of the 2013-14 drafts, then, since we are excluded from the first two rounds.

Our 2009 draft (Carlisle Colyer Melksham Long Howlett Hardingham Crameri) is equal with our 2006 draft (Gumby Houli Jetta Reimers Hislop Davey) ?

K.

Thank you for taking so much trouble - this was of great interest.

Out of interest could you chuck up the workings somewhere?

Would be interested to have a look.

Thanks for your efforts. A very interesting read. It must have taken some time to put that together.
Of course it's very difficult to quantify a qualitative thing like performance. It's always going to be problematic. But I like your approach to this problem.

@Frosty:

 

Rookie and PSD drafts I ignored, except where players were upgraded to the main list through the ND, in which case they were counted as normal draft picks. 

 
Agree on development vs drafting. For mine, the former is much, much more important than the latter. I chose 2006 as it was the Gumby year. I guess the further back in time you go, the greater the influence player development has on performance, percentage-wise. 
 
When I started this I expected to see the last 10-20 picks all not playing any games. I was quite surprised to see how many late picks played games.
 
Agree with the 'only having the last 5 picks' statement. But if you do have the last 5 picks and you pick up AFL-standard players, then you're doing pretty well with those picks!

What do you do for guys like Scully or other picks who didn't work out at their first club?

 

I left them counted at their first club. My logic was that the fact GWS offered him a truckload of money doesn't affect whether he was a good-value draft pick by MFC. 

 

Our 2009 draft (Carlisle Colyer Melksham Long Howlett Hardingham Crameri) is equal with our 2006 draft (Gumby Houli Jetta Reimers Hislop Davey) ?

K.

 

Think you've missed what I'm trying to convey. 

 

Consider this scenario. 

 

You have pick 1 and pick 50. Both players picked are of similar quality and play 50 games. Pick 50 was clearly better value than pick 1. That's what I'm trying to illustrate.

 

The fact that in 2006 and 2009 our drafting accuracy ranking was the same doesn't mean we got the same quality players each year.

 

Further example:

 

You have pick 1 and pick 50. Pick 1 plays the 5th most games of any player in that draft. Pick 50 plays the 55th most games. The relative accuracy of each of these picks is the same – both players were close to where they should have been drafted on a games-played basis. Clearly pick 1 is the better player, but the recruiters performed equally well for both picks. 

 

Out of interest could you chuck up the workings somewhere?

Would be interested to have a look.

 

Shoot me a PM with your email address. :)

 

Our 2009 draft (24 Carlisle 26 Colyer 10 Melksham 33 Long  Howlett Hardingham Crameri) is equal with our 2006 draft (2 Gumby late 30s Houli 18 Jetta 50s Reimers 20 Hislop 40s Davey) ?

K.

 


I'm pretty sure I grasped it, but I think a lot will depend on the assumptions you take in (rookies in or out, whether you count rookie upgrades, whether you only count 'live' picks or the actual order with passes, etc.).

 

2006: Gumby, Hislop would've been huge losses. Davey a clear win. Reimers & Jetta break even or slightly ahead? The only rookie who played a game was Rama.

 

In 2009, Melksham would've been at worst a break-even; Carlisle wouldn't be far off 24th; Colyer slightly behind the eight-ball. Long a dead loss, but Howlett/Crameri huge wins, Hardie a moderate win, & Muscles Marigliani probably a win as well.

 

PM sent

 

 


 

 

I removed passes from the analysis. So, pick 72 with 4 passes before was considered the 68th player picked. 
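That pass handling can be sketched like so (illustrative only; the pick list and pass positions are made up):

```python
# Re-number a draft so that passes don't occupy a slot: each actual selection
# is ranked by its position among live (non-pass) picks only.
def live_pick_numbers(selections):
    """selections: list of (official_pick_number, player_name_or_None),
    where None marks a pass. Returns {player: live_pick_number}."""
    numbers = {}
    live = 0
    for _official, player in selections:
        if player is not None:
            live += 1
            numbers[player] = live
    return numbers

# Official picks 1-72 with four hypothetical passes before pick 72:
draft = [(n, None if n in (10, 25, 40, 60) else f"player_{n}")
         for n in range(1, 73)]
print(live_pick_numbers(draft)["player_72"])   # 68, as in the example above
```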

 

Rookies - I only included them if they were elevated onto the list through the ND. 
 

You've pretty much nailed it with the above, except I've looked at the ND only, not the PSD. So Hardie doesn't appear in the analysis as he was a PSD selection, and whilst Muscles played for us as a promotion, he never came through the National Draft, so he doesn't appear. I think there is further scope for Rookie and PSD analysis, although I suspect it may be harder to draw conclusions from these due to the gentlemen's-agreement-style nature of the PSD and the more speculative nature of Rookie draft picks.  

 

I suspect we will look good out of a similar piece of analysis of Rookie drafts. I'll bash something together for a future post. 

Yeah if I had the time I'd enter everything inc. PSD & Rookie picks. That would change the 2009 numbers significantly.

And rookie upgrades – to my mind they basically should be counted as a pass, with the original numbers coming from when they came onto that club's list, i.e. their rookie pick in the year/s before. Whether you upgrade them with pick 30 or pick 130 is irrelevant, as those players are not available to be picked.

In real terms, to take the same example, you only rank the 2009 batch of players down to 57, with Long equal 57th for games played. In reality there are ~50 others who played 1 or more games who came into the system that year, so he's tied 107th for games played at pick 33 = -74.

Crameri has played 79 games (~ 15th) at pick 127 = +112
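Those two numbers fall straight out of the opening post's formula (pick number minus games-played rank; the ranks here are the approximations given above):

```python
# Pick minus games-played rank, as defined in the opening post.
value = lambda pick, games_rank: pick - games_rank

print(value(33, 107))    # Long at pick 33, tied ~107th: -74
print(value(127, 15))    # Crameri at pick 127, ~15th: +112
```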

 

Zone & international picks are another curiosity, as they are effectively only open to one club – in that way they're similar to rookie upgrades. I don't think they'll affect the overall picture too much, as only a handful are taken every year and I think the scheme has since been canned. Tuohy, Hanley & Claye Beams head a fairly short list.

 

Now go do all that! ;)


 
Rookie upgrades – I considered leaving them out. Like you say, they are a player that is not able to be picked by another club. However, I figured that the club making that selection deserved to be evaluated on it. I considered this more important than the effect on other clubs' results of not being able to pick that player. If I were to do the analysis with the Rookie draft and ND together, then I'd remove the elevation to avoid the player being counted twice. So Crameri's rookie selection in 2009 would count toward the results, but his 2011 elevation onto the ND list wouldn't. 
 
I've always felt that some clubs use the ND and Rookie drafts a little differently – historically, more speculative in the Rookie draft. But I think this is changing as more and more players come through the Rookie draft. Or at least that's just my gut feel. 

I think some clubs took longer to cotton on, yes. But some clubs (cough, Essendon) took a long time to cotton onto the draft as a whole. Running a measure like this over it all might expose that.

This is interesting analysis, and thanks for doing it. However, purely ranking them on games played versus draft position (in my opinion) only tells part of the story. It doesn't give any consideration to the quality of those games. What's more, under your methodology the only way the number 1 draft pick can be the best value in any draft is if they play the most games. 

 

If I am applying your logic correctly, here are the results from the 2011 draft:

 

Ellis (Rich)
D. Smith (GWS)
Green (GWS)
Adams (GWS/Coll)
C. Smith (WB)
Wingard (PA)
McKenzie (Nth)
Tomlinson (GWS)
Docherty (Bris/Carl)
Sheridan (Frem)
Kav (Ess)
Coniglio (GWS)
Tyson (GWS/Melb)
Haynes (GWS)
Longer (Bris)
Sumner (GWS)
Patton (GWS)
Hoskin-Elliott (GWS)
Buntine (GWS)

 

It almost looks like a random number generator could have produced those results. Wingard should be a clear #1 (IMHO), and there is no way that Kav (career total: 7 games) is ahead of Hoskin-Elliott, Coniglio, Patton or anyone else.

 

The only objective measure of impact within games (and therefore quality, although it is debatable) is SuperCoach scores or something similar. So if you looked at total career SuperCoach points, that would give some depth to the games-played ranking.  

 

Not sure how you would do it, but if you gave weighting to AA selections (BIG weighting) or awards, that would help you more. 

 

By the way, how did you capture Jaeger O'Meara?

 

Just my thoughts

Whilst that's true, it's a hell of a lot more data. Over the long run, games played will give some indication. In the short term, no – and GWS complicates things, as anyone half decent on their list is playing every game; not much competition for spots.

I'd group players into bands. As mentioned above, the guys taken with the highest picks are on a hiding to nothing. One injury to a #1-#5 pick and they'll fall down the rankings. Now, that is partly fair, since you won't have got as much out of them, but even over a career there will be imbalances.
I'd lump guys into bands of 10, and only use their draft pick within that band to differentiate them.
I'd also exclude rookies, since it is unfair – the club will only promote them if they think they're good, and they've already been drafted once.
But good work.
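A sketch of that banding idea (assuming bands of 10 picks, which is one possible reading of "10 band groupings"):

```python
# Band picks in groups of 10 so that, e.g., picks 41-50 are treated as
# interchangeable; value is then measured band-vs-band rather than pick-vs-pick.
def band(n: int, width: int = 10) -> int:
    return (n - 1) // width + 1        # 1-10 -> band 1, 11-20 -> band 2, ...

def banded_value(pick: int, games_rank: int, width: int = 10) -> int:
    return band(pick, width) - band(games_rank, width)

print(band(43), band(48))      # both in band 5, so treated as equal
print(banded_value(35, 6))     # band 4 pick with a band 1 output: +3
```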


 

How is Kav ahead of Coniglio?