GDPR & Brexit – Citizen Pain or Gain?
“I think I speak for us all when I say ‘huh’?” Frank Breddor
If you feel like you haven’t been bombarded with updates and headlines around the coming General Data Protection Regulation (GDPR) and the UK’s decision to leave the EU, known widely as Brexit, you’ve had a lucky escape so far.
For me, the GDPR is like Brexit in many ways – and don’t worry, this is not the political ramblings of an armchair politician!
One thing is for sure: for those of us who live in the UK and the EU, we’re in Brexit and the GDPR together, whether we like it or not. Seismic in their own ways, both appear to be wrapped up in masses of speculation, with emotions swirling around how each will affect us personally.
What is more, the news and information surrounding them has been a massive vortex of headlines, sprinkled with heavy lashings of speculation and a bit of doom and gloom thrown in for good measure!
And for me, the key similarities between Brexit and the GDPR do not stop there.
The sheer volume of news, spin, axioms and ‘fake news’ surrounding each meaty subject is, from what I’ve seen to date, surpassed only by the hype that surrounded the Y2K bug!
Not to mention the overall feeling of utter bewilderment around the substance that really matters: the detail.
And understandably, as huge as each topic is, both have raised critical questions.
After all, I too would quite like to know the likely economic shape of the UK post-Brexit, and how the GDPR will impact my own data and business come May 2018.
“GDPR is part of the response to the challenge of upholding information rights in the digital age” – Elizabeth Denham, UK Information Commissioner
The fact of the matter is, as our world evolves and becomes more and more digitised, we are holding more and more data.
Whether it’s client, employee or consumer data, we all hold it and we all do things with it – and that makes us accountable for it as its processors and managers.
But of course, that has always been the case with data, and most of us have comprehensive measures in place to safeguard all the data we hold, upholding its safety and privacy – haven’t we!?
In the last 12 months, there have been a score of massive data breaches, including millions of Yahoo, LinkedIn and MySpace account details. Under the GDPR, the ‘destruction, loss, alteration, unauthorised disclosure of, or access to’ people’s data has to be reported to a country’s data protection regulator – in the case of the UK, the ICO – where it could have a detrimental impact on the individuals it concerns.
The consequences can include, but aren’t limited to, fines, confidentiality breaches and damage to reputation.
So, to the GDPR, which supersedes the 1995 Data Protection Directive on which current UK data law is based. From a data ‘ownership’ perspective, it’s now all about accountability, transparency and governance.
A bit like the politicians all involved in the UK’s exit from the EU then!?
The public body steering this data protection evolution is the Information Commissioner’s Office (ICO), which has issued key best practices to ensure compliance with the GDPR:
• The accountability principle
• Records of processing activities (documentation)
• Data protection by design and by default
• Data protection impact assessments
• Appointment of a Data Protection Officer, where required under the GDPR
• Codes of conduct and certification mechanisms
What stands out for me is consent within the GDPR.
This is big!
The GDPR mandates that consent to gather personal data must be ‘freely given, specific, informed, and unambiguous’, and articulated with ‘clear affirmative action’. That means no more pre-ticked boxes or one-liners explaining how you may receive ‘marketing emails or correspondence’ in future if you have provided your email address to download a white paper, for example.
We will also have to demonstrate that consent was given to hold and use data. A case from 2016 highlights this ‘catch-22’ perfectly.
Honda Motor Europe sent out over 280,000 emails asking individuals, “Would you like to hear from Honda?”. The emails were sent in good faith to addresses for which Honda had no opt-in/opt-out information. The ICO fined the company £13,000.
I’ve been a software tester for over 15 years, and the fact is we use data to support our test practices.
This data could be anything from thousands to billions of records.
Data is used to test, say, an application in several ways, such as verifying that a given set of inputs to a given function produces an expected result. Other data may be used to challenge the application’s ability to respond to unusual, exceptional or unexpected input.
Test data may be produced in a focused or systematic way (as is typically the case in domain testing), or by using other, less focused approaches (as is typically the case in high-volume randomised automated tests). It may be produced by the tester, or by a program or function that aids the tester.
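To make the distinction concrete, here is a minimal sketch of both approaches for a single hypothetical field (the field spec, function names and value choices are all illustrative, not from any particular tool):

```python
import random
import string

def domain_values():
    """Focused (domain) test data: boundary and representative values,
    chosen deliberately - empty, typical, maximum-length, and invalid."""
    return ["", "A1 1AA", "SW1A 1AA", "X" * 8, "123 456"]

def random_values(n, seed=42):
    """High-volume randomised test data: a fixed seed keeps the run
    reproducible, so a failing input can be regenerated on demand."""
    rng = random.Random(seed)
    alphabet = string.ascii_uppercase + string.digits + " "
    return ["".join(rng.choice(alphabet) for _ in range(rng.randint(0, 8)))
            for _ in range(n)]

focused = domain_values()        # a handful of hand-picked cases
randomised = random_values(1000)  # a thousand generated cases
```

The seed is the important detail: without it, a randomised test that fails overnight can be impossible to reproduce the next morning.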
Critically for me and for my testing peers, the chance of this data ‘slipping through the net’ is quite high, as test data may be recorded for re-use, or used once and then forgotten. On top of this, the data which provides the most valid test may hold personally identifiable information (PII): quite often it is taken from production, obfuscated (or at least it should be!), version controlled and then used in our test environments. So, how do we ensure that our test data complies with the ‘right to be forgotten’ and individual data access rights within the GDPR?
How, also, can we ensure the security and governance of the PII we hold?
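One common obfuscation technique is keyed pseudonymisation: each PII value is replaced with a stable token so relationships between records survive, while the original values never reach the test environment. A minimal sketch, assuming a hypothetical customer record and a secret held outside the codebase:

```python
import hashlib
import hmac

# Hypothetical secret - in practice this lives in a secrets manager,
# never in source control alongside the masked data.
MASKING_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Replace a PII value with a stable, non-reversible token.

    The same input always yields the same token, so joins across
    masked tables still line up after the masking pass."""
    digest = hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 42.50}
PII_FIELDS = {"name", "email"}
masked = {k: (pseudonymise(v) if k in PII_FIELDS else v)
          for k, v in record.items()}
```

Note that pseudonymised data is still personal data under the GDPR if the key exists somewhere, which is exactly why the governance question above matters as much as the masking itself.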
A Plan for All Seasons
Huge leaps forward have been made in technology, with environments and code now being deployed on demand. Cloud-based solutions for environments and automated code deployments are providing an opportunity to test more frequently. Keeping pace with this capability is a struggle from a test perspective, and herein lies the bottleneck – and quite often the reason why most Continuous Deployment and DevOps initiatives struggle or fail. Automated testing is only a part of the continuous testing solution!
How we hold, manage, use and secure our test data is the most overlooked part of a leaner, faster automated deployment solution.
It can take weeks or even months to refresh test data which has been used for the previous test run.
This is a big problem!
On top of this, we are now faced with the GDPR challenge of managing our test data in a way that remains compliant.
No longer will it be acceptable simply to take production data, obfuscate it and use it for testing. If this is done inadequately, your organisation – and you personally – may be subject to a hefty fine under the GDPR.
For me, the safest option is to generate synthetic test data, which can be version controlled and used in line with your test cases.
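As a rough illustration of what that looks like in practice, here is a sketch of a fully synthetic, deterministic generator (the field names and value pools are invented for the example): because the output is seeded, regenerating it produces an identical file, which is what makes it safe to commit and diff in version control next to the tests that consume it.

```python
import csv
import io
import random

def generate_customers(n, seed=2018):
    """Generate fully synthetic customer records - no production data.

    A fixed seed makes the output deterministic, so the generated file
    can be version controlled alongside the test cases that use it."""
    rng = random.Random(seed)
    first = ["Alex", "Sam", "Jo", "Chris", "Pat"]
    last = ["Smith", "Jones", "Patel", "Brown", "Khan"]
    return [{"id": i + 1,
             "name": f"{rng.choice(first)} {rng.choice(last)}",
             # .invalid is a reserved TLD, so these can never be real addresses
             "email": f"user{i + 1}@test.invalid"}
            for i in range(n)]

# Write the records as CSV, ready to check into version control.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "email"])
writer.writeheader()
writer.writerows(generate_customers(100))
```

Since no real individual ever appears in the data, the ‘right to be forgotten’ question simply doesn’t arise for it.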
Now that’s all well and good; however, this can take months each time the test data requires an update.
Imagine there’s a solution that can:
• Automatically generate synthetic test data, including input and reference test data
• Assist you in deciding what to test based on the test data generated
• Ensure that input and reference test data match
• Provide a repeatable solution for test coverage and data coverage at the touch of a button
No imagining needed – there is now Orson.
Orson calculates your optimal test sets, ensuring that you are efficient in targeting the right things, and has phenomenal functionality which allows you to generate coverage, test data requirements, test data, tests and expected results on demand – saving huge amounts of time otherwise spent repeating all those tasks involved in setting up your testing assets for execution.
It gives test teams the opportunity to keep pace with Continuous Delivery practices and become an efficient quality engine to complement the dev teams.
Technology and how fast it moves and evolves is always impactful. The same could be said for vast constitutional and governing shifts. No-one really knows yet the full impact Brexit or the GDPR will have.
However, as a software tester, I am pretty sure I know the impact Orson will have on testing as we know it. In this case; it will be no pain, all gain.