ABA Data, Scientific Rigor - A Stitch in Time

Podcast 8 - A Stitch in Time

ABA technical concepts covered in this podcast: Data collection; measurement; determinism; empiricism; topography; operationalize; frequency; intensity; duration; range; mean; latency; philosophic doubt; functional equivalence; replacement behaviour; social positive reinforcement; extinction; positive punishment; feedback reinforcement; scientific rigor; inter-observer agreement; pattern analysis; validity; normative vs single-subject data; baseline; reliable indicators; variability; fidelity; maintenance; generalization.
Presenters - Bobbi Hoadley, Cathy Knights.
Data and statistics are touted as the best consumer tools in our digital economy, and they are certainly the best tools for behaviour change.
A stitch in time saves nine: if you identify a problem quickly and fix it right away, the problem stays small.
In behaviour analysis, a lot of us are all about the data. The word data can strike fear in someone's heart – it can be very intimidating. The whole idea of taking data puts me in the category of being a nerd.
But we all use it. For example, when buying a car I knew I wanted certain things, such as colour and engine type, so I did research. You can go online and get all kinds of data about anything – the criteria are going to be different between you and me. Any intelligent person who wants to make a good decision will collect data.
There are some behaviour analysts who focus more on the data than the person – we need to do a better job of teaching the humanistic side of our technology.
To get good data, you need to clear your head of bias. All behaviour is quantifiable – for example, you can count the frequency and intensity of a behaviour.
The first thing we do is describe the behaviour and ask questions that move people toward clear descriptions – tell me what you see, not what you think. A topography is a description that is operationalized, e.g. he trips people, he pushes, he hits someone from behind.
Then I'll ask how often it occurs. When you're focused on bad behaviour you anticipate it all the time. Does it happen 3 times an hour? 3 times a day? 3 times a week? What are the most and the fewest times it could possibly happen?
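For listeners who like to see the arithmetic, here is a minimal Python sketch (not from the episode) of how those frequency answers could be summarized as a mean and a range; the counts and the one-hour sessions are made up.

```python
# Minimal sketch (not from the episode): summarizing hypothetical frequency
# tallies from a few one-hour observation sessions.
counts_per_hour = [3, 1, 5, 2, 4]  # occurrences of the behaviour in each observed hour

mean_rate = sum(counts_per_hour) / len(counts_per_hour)       # average per hour
lowest, highest = min(counts_per_hour), max(counts_per_hour)  # the range

print(f"Mean: {mean_rate:.1f} per hour; range: {lowest}-{highest} per hour")
```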
Then we want to know the intensity – often I'll get people to show me: does it leave a bruise? Show me how hard that hit was. Often it's not that aggressive or violent. I can usually tell by the way they're talking about it how much violence is in there; if I sense a lot of anxiety and fear from them, then I know there is probably a lot of violence to it. If it's a push, does the person fall over or cause an accident? Sometimes we'll find that behaviours termed "aggressive" or "violent" are actually little warnings that people need to pay attention to and say, "Oh, I'm sorry you're overwhelmed, I'll get out of your space." Then we teach the person to use words instead of that push. How much impact does it have on the environment, and how much does the environment change? And how long does it last – the duration?
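As an illustration only (not from the episode), a hypothetical incident log with a made-up 1-3 intensity scale and durations in seconds might be summarized like this:

```python
# Minimal sketch (not from the episode): logging each incident with a simple
# 1-3 intensity rating and a duration in seconds, then summarizing.
incidents = [
    {"intensity": 1, "duration_s": 30},  # 1 = warning (e.g. a push), 3 = causes injury
    {"intensity": 2, "duration_s": 90},
    {"intensity": 1, "duration_s": 20},
]

total_duration = sum(i["duration_s"] for i in incidents)
worst = max(i["intensity"] for i in incidents)
print(f"{len(incidents)} incidents, {total_duration} s total, worst intensity {worst}")
```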
In a recent case, we had someone who was considered violent towards everyone in the facility. He would have very brief outbursts of swearing that lasted 30 seconds, so we got staff to say "good control" as soon as he recovered. We ended up preventing all the greater, more challenging behaviours. Those brief outbursts were actually his anger management – getting control – but staff had been lumping them in with truly violent behaviours. We took the high-level behaviours and set zero tolerance, with a natural consequence of calling 911 right away. For the low-level behaviours, he could be cued to get control or withdraw.
When I get staff to collect data, we get them to look for productive behaviour. We are building in reinforcement of the alternate behaviours and focusing staff on how to do that. We'll give them an Excel graph and a check-mark sheet to record whether a behaviour happened. This data becomes feedback. Too often we're not measuring behaviour, we are pathologizing it. One of my hopes is that when we decide to certify someone – hold someone against their will – there will be data attached to it. When we learn about data, we learn how to keep it so that it's scientifically sound enough to publish. We learn to design research studies around it and to demonstrate reliable inter-observer agreement. That's not easy to get in a group home or large facility. People who think a behaviour happens all the time don't notice when it's happening more or less often.
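The episode doesn't specify how agreement is calculated, but one common approach is to compare two observers' counts session by session. A minimal sketch under that assumption, with hypothetical tallies:

```python
# Minimal sketch (not from the episode): count-based inter-observer agreement.
# Two observers tally the same sessions independently; counts are hypothetical.
observer_a = [4, 2, 5, 0]
observer_b = [5, 2, 4, 0]

# Per session: smaller count / larger count (1.0 when both are zero), then average.
agreements = [1.0 if a == b else min(a, b) / max(a, b) for a, b in zip(observer_a, observer_b)]
ioa_percent = 100 * sum(agreements) / len(agreements)
print(f"Mean per-session IOA: {ioa_percent:.0f}%")
```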
Latency is the measure of the delay between a stimulus and the response – sometimes used when a person has a cognitive deficit.
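A hypothetical example of recording latency (not from the episode; the timestamps are invented):

```python
# Minimal sketch (not from the episode): latency as the delay between a prompt
# (stimulus) and the start of the response. Timestamps are hypothetical.
from datetime import datetime

prompt_time = datetime(2018, 2, 1, 10, 15, 0)       # when the instruction was given
response_start = datetime(2018, 2, 1, 10, 15, 42)   # when the person began to respond

latency_seconds = (response_start - prompt_time).total_seconds()
print(f"Latency: {latency_seconds:.0f} seconds")
```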
We also follow up with staff and compare data collection to charting notes. Sometimes they are over- or under-estimating; record it in the moment. A common mistake is assuming, because you have really great rapport, that your client is a certain way all the time – they are responding individually to the environment. How do we know a problem behaviour is truly a problem behaviour? Is there a pattern, occurring over time, across settings, across people? We work on retainer with some clients because they are so reactive to their environment that it has to be monitored in order to create stability for the person.
How do I change my own behaviour? Can I take my own data? Any marriage counsellor outlaws the words "always" and "never". If something really bothers you, try to quantify it – nothing is more reassuring than learning it's not occurring as often as you think. Reinforce the alternate behaviour better. Quantify the problem to understand the behaviour.
What is the difference between research and taking data on a single subject? The kind of normative data that drives science is based on probabilities in a normative population. People who believe only in that will sometimes see single-subject data as anecdotal, which it is not – single-subject data is really valuable. Normative data speaks to a normative population and probability, and most research in medicine is based on it. Yet the uses of some medications often aren't based on data from the populations they are most often used with – college males provide a lot of the normative data.
People understand that science isn't foolproof. Some things, like our environment, could in some ways be considered single-subject data, because we only have one world. The best thing we can do is measure you against yourself to see change.
We create a baseline. We don't always have the luxury of observing the behaviour ourselves, so we read incident reports and interview people who watched the incidents. If we can't figure out where the baseline is because people in the environment can't provide data on it, I'll collect preliminary data. We measure from that baseline when we give people tools and strategies. We train the supporters to know what to look for so we're getting fidelity to the plan – or we do something else.
All it takes is 3 data points to know something is not working. We accept some variability in that data. We might lose access to the data at another point, so we have to follow the evidence we see. Often we're spending our time setting the stage for personal development. With private clients we usually have enough funding to see the new behaviour learned and the person not go back to the challenging behaviour.
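Here is a minimal sketch (not from the episode) of how that three-data-point rule of thumb might be checked against a baseline; the counts and the exact decision rule are assumptions for illustration.

```python
# Minimal sketch (not from the episode): comparing intervention data to a
# baseline, echoing the "3 data points" rule of thumb. Numbers are hypothetical.
baseline = [6, 8, 7, 9, 6]   # incidents per week before the plan
intervention = [8, 9, 8]     # most recent weeks on the plan

baseline_mean = sum(baseline) / len(baseline)
# Three consecutive points at or above the baseline mean suggest the plan
# is not working and should be reviewed.
needs_review = len(intervention) >= 3 and all(x >= baseline_mean for x in intervention[-3:])
print(f"Baseline mean: {baseline_mean:.1f} per week; review the plan: {needs_review}")
```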
Interventions are specific to the function of that person's behaviours. Sometimes we fail when we don't have buy-in; if people are resistant to what we're saying, sometimes they are just not ready.
We should all demand that any kind of therapeutic intervention – be it a practitioner or a medication – shows results and gives data on how things are getting better. Then we can better judge whether we are being served.
