There are so many things I could mention, but here are a few key takeaways. They're not exactly how we score leads ourselves, but they highlight common areas for optimisation and for avoiding issues:
- Scoring isn't a marketing-only project. Sales and data teams can offer enormously useful insight into which specific behaviours and demographics genuinely indicate higher engagement or a higher probability to convert. Equally, a lead score can be an enormously useful piece of information for them to use, but only if they actually understand what it is and how it's calculated. So, don't black-box it.
- Tokens are a great way to get a quick overview of the score change associated with specific behaviours or values, and they also make it easy to update those score changes across multiple campaigns.
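To make that concrete, here's a minimal sketch of the idea behind tokens: score values live in one central place, and every campaign references the token rather than a hard-coded number, so one update propagates everywhere. The token names and point values here are made up for illustration, in the style of Marketo's {{my.token}} syntax.

```python
# Central score values, referenced by name - the same idea as Marketo
# "my tokens" like {{my.emailClickScore}}. Names/values are hypothetical.
SCORE_TOKENS = {
    "my.emailClickScore": 5,
    "my.formFillScore": 15,
    "my.webinarAttendScore": 25,
}

def apply_score(current_score: int, token: str) -> int:
    """Apply the score change a token represents to a lead's current score."""
    return current_score + SCORE_TOKENS[token]

# Two different campaigns can reference "my.formFillScore"; editing the
# value once in SCORE_TOKENS updates both.
print(apply_score(0, "my.formFillScore"))   # → 15
print(apply_score(15, "my.emailClickScore"))  # → 20
```

The payoff is the single point of change: when the business decides a webinar attendance is worth 30 points instead of 25, that's one edit, not a hunt through every campaign.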
- Keep it simple to start with. Don't create too many variations, like an individual score campaign for every single form. Combine wherever it's logical to do so (e.g. one campaign for all content form fills), and keep your scoring within the scoring program rather than scattered throughout your instance. Start simple, then scale once you know it's working well.
- Run some scenario validations. Create a few fictional examples on paper of people you would perceive as meeting MQL criteria, apply your scoring rules, and see if they would actually qualify. Then do the reverse: check whether people would qualify when they shouldn't, and whether some rules unintentionally double up on each other. It's a simple step that's often missed, and it always picks something up.
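The paper exercise above can even be scripted. Here's a hypothetical sketch: a handful of rules, two fictional leads, and a check of who crosses an assumed MQL threshold. All rule names, point values, and the threshold are invented for illustration, not taken from any real model.

```python
# Hypothetical scoring rules and MQL threshold for scenario validation.
MQL_THRESHOLD = 100

RULES = {
    "filled_demo_form": 50,
    "attended_webinar": 30,
    "clicked_email": 5,
    "is_target_industry": 25,
}

def score(behaviours: list[str]) -> int:
    """Total score for a fictional lead's list of behaviours/attributes."""
    return sum(RULES[b] for b in behaviours)

# A lead you'd expect to be an MQL...
ideal = ["filled_demo_form", "attended_webinar", "is_target_industry"]
# ...and a casual clicker who shouldn't qualify.
casual = ["clicked_email", "clicked_email"]

assert score(ideal) >= MQL_THRESHOLD     # should qualify (105 points)
assert score(casual) < MQL_THRESHOLD     # should not (10 points)
```

If the "ideal" persona falls short, or the casual one sneaks over the line, you've found a problem before it hits real leads.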
- Restrict availability. Don't let people get 5 points every time they click any email, with no limit on how often they can run through that campaign - you'll get tripped up by bots racking up 30 clicks in a second. It's often a good idea to allow leads through these sorts of score campaigns once every hour, or to put similar restrictions in place.
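The once-per-hour idea boils down to a simple cooldown check, which a minimal sketch shows (the lead IDs and timestamps are invented; in practice the platform's own campaign qualification settings would handle this):

```python
# Sketch of a once-per-hour cooldown on a score campaign, so a bot
# firing 30 clicks in one second only scores once.
from datetime import datetime, timedelta

COOLDOWN = timedelta(hours=1)
last_scored: dict[str, datetime] = {}  # lead id -> last time they scored

def should_score(lead_id: str, now: datetime) -> bool:
    """Allow a lead through the score campaign at most once per cooldown."""
    last = last_scored.get(lead_id)
    if last is not None and now - last < COOLDOWN:
        return False
    last_scored[lead_id] = now
    return True

t0 = datetime(2024, 1, 1, 9, 0)
assert should_score("lead-1", t0)                             # first click scores
assert not should_score("lead-1", t0 + timedelta(seconds=1))  # bot burst blocked
assert should_score("lead-1", t0 + timedelta(hours=1))        # next hour scores
```

The same shape works for any restriction window: swap the cooldown for a day, or cap total qualifications instead of rate.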
There's always more, but those are some of the most important ones that jump to mind!