Methodology
Ethics
Ethics approval for the study was granted by the National Research Ethics Service Committee for Yorkshire and the Humber on 23 March 2017.
Datasets
The study is based on four datasets:
- Antenatal and postnatal surveys of, respectively, women’s birth expectations and preferences and their birth experiences
- Audio and video recordings of labour
- Quantitative coding of interactional data (from the recordings)
- Interviews with midwives and obstetricians
With permission from the authors and the University of Leeds, we adapted two self-completion questionnaires from the Great and Greater Expectations studies. These were refined with input from our service-user groups. Participants could complete surveys online through Qualtrics or by mail.
Antenatal Questionnaire
This questionnaire collected demographic information and established women’s expectations of labour and birth through closed questions covering:
- Interactions with healthcare practitioners and birth planning
- Wishes and expectations about labour and birth
- Pain management
- Preferences about procedures during labour and birth
- Anticipated experiences during the third stage
Postnatal Questionnaire
This follow-up survey assessed women’s satisfaction with labour and birth, and examined the gap between their desired and actual involvement in decision-making. The questions covered:
- Experiences with medical staff
- What happened during labour (pain, birth partners, complications, interventions, positions, pushing, delivery of the baby and placenta)
- Overall labour experience
- Satisfaction with the birth experience
Response rates for both questionnaires were high.
Recordings of Labour
Each participant received a Smots™ mobile camera from Scotia UK, which recorded high-quality video and audio during labour. Data were transmitted over a secure intranet connection to a bespoke encrypted laptop held in a secure location, and later transferred to a secure university server.
Women and their birth partners had complete control over the camera, including its positioning, the recording format (video or audio-only) and the ability to turn it on or off at any time. Healthcare providers could only turn off the camera with the woman’s consent, except in medical emergencies.
We collected 37 recordings in total: 24 video and 13 audio-only. Recordings featured 43 birth partners and 74 healthcare professionals (primarily midwives, but also student midwives and obstetricians). The recordings ranged in length from 8 minutes to over 15 hours, with an average of four and a half hours. They typically captured established labour through to delivery of the placenta, though some began earlier or ended immediately after birth.
Quantitative Coding of Interactional Data
In creating our coding scheme to generate quantitative data, we were committed to reducing the data for analysis without sacrificing sensitivity to the interactions. While informed by existing formal coding frameworks, our scheme was primarily developed through a bottom-up process drawing on Conversation Analysis (CA) to capture what was actually happening in the interactions.
Inspired by previous research using online questionnaires to extract quantitative data, we built a coding scheme comprising:
- A comprehensive codebook
- An online data extraction form built with Qualtrics software
The coding scheme was refined through several steps:
- Extensive data familiarisation by three team members watching and listening to a sample of recordings
- Development through trial and error, creating definitions of interactional practices derived from CA
- Multiple iterations with frequent team meetings to discuss the process and resolve disagreements
- Creation of a final scheme that identified interactional phenomena while allowing reliable classification between coders
What we coded
For each recording, we coded the following (see the illustrative sketch after this list):
- Labour stages when recording started and ended, along with format (audio-only, video, or mixed)
- All decisions about 12 key aspects of care (e.g., pain relief, foetal monitoring, vaginal examinations)
- Who initiated each decision (midwife, doctor, woman, or birth partner)
- Decision-points within each decision, classified into 11 types (e.g., pronouncements, recommendations, requests)
- Immediate recipient responses to each decision-point (e.g., acknowledges, agrees, disagrees)
- The transcribed text comprising each decision-point
- Whether the course of action was pursued: agreed and acted upon, agreed but not acted on, or abandoned
- The level of ‘sharedness’ and ‘balance’ in decision-making on a seven-point scale
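To make the structure of these codes concrete, the sketch below shows how a single coded decision-point might be stored. The field names and example values are ours for illustration; they are not the categories defined in the project codebook.

```python
from dataclasses import dataclass

# Illustrative record for one coded decision-point. Field names and example
# values are hypothetical; the study's codebook defined the actual categories.
@dataclass
class DecisionPoint:
    recording_id: str       # labour recording the decision-point belongs to
    decision_type: str      # one of the 12 key aspects of care, e.g. "pain relief"
    initiator: str          # "midwife", "doctor", "woman" or "birth partner"
    initiating_format: str  # one of the 11 decision-point types, e.g. "recommendation"
    response: str           # immediate recipient response, e.g. "agrees"
    transcript: str         # transcribed text comprising the decision-point

example = DecisionPoint(
    recording_id="site1_recording07",
    decision_type="pain relief",
    initiator="midwife",
    initiating_format="recommendation",
    response="agrees",
    transcript="(transcribed talk for this decision-point)",
)
```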
Measuring decision-making balance
We assessed the level of ‘sharedness’ and ‘balance’ using seven categories (one possible numeric coding is sketched after the list):
- Unilateral healthcare professional (HCP)
- HCP-led but birth party had some say
- HCP-led but birth party had most say
- Equal balance between HCP and birth party
- Birth party-led but HCP had most say
- Birth party-led but HCP had some say
- Unilateral birth party
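Because the resulting dataset includes mean balance and sharedness scores at the labour level, these categories presumably map onto a numeric one-to-seven scale. A minimal sketch of one such mapping, assuming (our assumption) that 1 denotes fully HCP-controlled and 7 fully birth party-controlled decision-making:

```python
from enum import IntEnum

# Hypothetical numeric coding of the seven balance/sharedness categories;
# the values actually used in the study's analysis may differ.
class Balance(IntEnum):
    UNILATERAL_HCP = 1
    HCP_LED_BIRTH_PARTY_SOME_SAY = 2
    HCP_LED_BIRTH_PARTY_MOST_SAY = 3
    EQUAL = 4
    BIRTH_PARTY_LED_HCP_MOST_SAY = 5
    BIRTH_PARTY_LED_HCP_SOME_SAY = 6
    UNILATERAL_BIRTH_PARTY = 7

# A labour-level mean balance score is then the average of per-decision scores.
scores = [Balance.UNILATERAL_HCP, Balance.HCP_LED_BIRTH_PARTY_SOME_SAY, Balance.EQUAL]
mean_balance = sum(int(s) for s in scores) / len(scores)  # (1 + 2 + 4) / 3 ≈ 2.33
```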
Reliability Testing
We tested inter-coder reliability using Cohen’s Kappa on independently produced codings:
- Outstanding agreement for initiating format (Kappas between 0.84 and 0.92)
- Substantial agreement for response format and sharedness (Kappas between 0.50 and 0.91)
- Moderate agreement for whether the course of action was pursued (Kappas between 0.26 and 0.69)
Higher reliability was achieved between the two project conversation analysts, indicating that CA experience was advantageous.
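For reference, Cohen’s Kappa corrects raw agreement between two coders for the agreement expected by chance: kappa = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is chance agreement. A minimal sketch of how agreement on, say, initiating format could be checked for a double-coded sample, using scikit-learn’s cohen_kappa_score (the labels below are illustrative, not study data):

```python
from sklearn.metrics import cohen_kappa_score

# Initiating-format codes assigned independently by two coders to the same
# sample of decision-points (illustrative labels, not study data).
coder_1 = ["recommendation", "pronouncement", "request", "recommendation", "request"]
coder_2 = ["recommendation", "pronouncement", "request", "pronouncement", "request"]

# Chance-corrected agreement between the two codings.
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's Kappa = {kappa:.2f}")
```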
Resulting Dataset
Our process created a multilevel hierarchical dataset (illustrated after the list) with:
- Decision-point-level variables: who initiated, who responded, initiating format, response given
- Decision-level variables: type of decision, number of decision points, whether action was taken, balance in decision-making
- Labour-level variables: frequency of decision-points, proportion of different decisional practices, mean balance and sharedness scores
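As an illustration of how the levels nest, the sketch below derives labour-level summaries from decision-point-level records using pandas; the column names and values are illustrative rather than the study’s actual variables.

```python
import pandas as pd

# One row per decision-point (illustrative column names and values).
decision_points = pd.DataFrame({
    "recording_id": ["rec01", "rec01", "rec01", "rec02", "rec02"],
    "initiating_format": ["recommendation", "pronouncement", "request",
                          "recommendation", "recommendation"],
    "balance_score": [2, 4, 5, 1, 2],  # hypothetical 1-7 balance coding
})

# Labour-level summary: number of decision-points and mean balance per recording.
per_labour = decision_points.groupby("recording_id").agg(
    n_decision_points=("initiating_format", "size"),
    mean_balance=("balance_score", "mean"),
)

# Proportion of each initiating format within each recording.
format_proportions = (
    decision_points.groupby("recording_id")["initiating_format"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)

print(per_labour.join(format_proportions))
```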
Interviews
Interviews were conducted to provide background about the institutional context of, and healthcare professionals’ approaches to, decision-making. The interviewees were a purposive sample of 7 midwives and 3 obstetricians at each site (14 midwives and 6 obstetricians in total), covering a range of grades and experience.
Interviews lasted approximately 45 minutes and took place in a pre-booked room at each site. They were professionally transcribed verbatim by a third party, and transcripts were anonymised (healthcare professionals were given a pseudonym based on their site location, professional role and participation number).
The interview sample was not purposively designed to overlap with the healthcare professionals who appeared in the video/audio recordings: none of the obstetricians and only three of the midwives interviewed took part in the recordings of labour.