roughly two months after the survey was completed; participants were reminded how they had responded to particular questions and asked to elaborate, with follow-up questions aiming to gain more insight into the contexts and experiences their initial survey responses could only allude to. We close the results section with a brief walkthrough of five semi-structured interviews, as these interviews, in conjunction with the survey results, helped us determine what we believe to be the most useful recommendations for a sustained remote/hybrid approach to game development.

Analysis

We took a mixed-methods approach to analyzing our survey data, as we wanted to see patterns in both the quantitative and qualitative data and how the two worked together to craft a fuller picture of the varied remote work contexts in which participants were working. In addition to the relevant statistical tests, we performed word frequency analysis and sentiment analysis on the responses to the first question of the survey, quantifying some of our qualitative results in order to see patterns that standard qualitative methods might otherwise miss; we performed the sentiment analysis by hand rather than with software. We began this process using a grounded theory approach, categorizing responses to our first qualitative survey question above as positive, negative, or neutral, but soon added mixed and undetermined categories: some responses, such as "half down/half up," appeared both positive and negative rather than neutral, while others, such as "busy" or "intense," were clearly not neutral but also clearly not positive or negative. Three researchers coded the responses individually, and the team then compared results to test for reliability; our codes agreed in 92% of cases, and we held a collaborative discussion to finalize the remaining 8%. In most cases, our coding differences were between the neutral and undetermined categories, as there was often overlap, or a case could be made for either interpretation more readily than for other category combinations. The following table provides a sample excerpt of this process:

Table 1. Sample Sentiment Coding

Response     Positive   Neutral   Negative   Mixed   Undetermined
Fine                    X
Prisoned                          X
Okay                    X
Good         X
Quieter                                              X
Up & Down                                    X

In our qualitative coding, we categorized responses to each survey question based on common themes we noted while reading the responses. For the second question