//my_research — previous revision 2015/12/06 14:36 by silvia; current revision 2022/01/11 10:42 by silvia//
**Research Project**

**Oral Presentation: {{:silviaradulescu_amlap2015_ruleinduction.pdf|Input Complexity and Rule Induction. An Entropy Model}}**
**November 11, 2021 - article published in Frontiers in Psychology (peer-reviewed journal)**
**Paper: Silvia Radulescu, Areti Kotsolakou, Frank Wijnen, Sergey Avrutin and Ileana Grama (2021) [[https://www.frontiersin.org/|…]]**

*please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author — //Silvia Radulescu//
----

**Doctoral Dissertation**

**Silvia Radulescu (2021) [[https://|…]]**

*please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author — //Silvia Radulescu//

----

**Paper: [[https://|…]]**
\\ | \\ | ||
- | *please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author | + | *please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author |
- | \\ | + | |
- | \\ | + | |
- | // | + | |
- | \\ | + | |
- | \\ | + | |
INPUT COMPLEXITY AND RULE INDUCTION. AN ENTROPY MODEL
\\
Silvia Rădulescu (Utrecht University), …
\\
S.Radulescu@uu.nl
\\
In language acquisition, …
\\
It was argued that children detect patterns in auditory input, like phonotactic information (Chambers, Onishi & Fisher, 2003) and word boundaries (Saffran, Aslin & Newport, 1996), by statistical learning. Statistical learning deals with computing probabilities that specific items co-occur in the input, and it cannot account for abstractions beyond those items. Previous studies (Gómez & Gerken, 2000) drew a distinction between abstractions based on specific items (e.g. ba follows ba) and category-based abstractions (generalizing beyond specific items, e.g. Noun-Verb constructions). An algebraic system was proposed (Marcus, Vijayan, Rao & Vishton, 1999) to account for extracting rules that apply to categories, such as “the first item is the same as the third item” (li_na_li). This system addresses generalization to novel items, but it does not explain how humans tune into such algebraic rules, or what the factors (if any) in the input are that facilitate …
\\
In our first experiment we exposed adults to 3-syllable AAB strings that implemented a miniature artificial grammar, to probe the effect of input complexity on rule induction. We manipulated two factors (the number of syllables and their frequency) and used entropy (a function of the two factors, calculated in bits) as a measure of complexity, to design three experimental conditions: low entropy - 3.5 bits (4×6 As/4×6 Bs), medium entropy - 4 bits (2×12 As/2×12 Bs), and high entropy - 4.58 bits (1×24 As/1×24 Bs). Participants gave grammaticality judgments on 4 types of test strings: grammatical trained AAB strings, grammatical AAB strings with new syllables, ungrammatical new A1A2B strings (three different syllables), and ungrammatical A1A2B strings with trained syllables. In a second experiment we exposed adults to a similar AAB grammar, but the three conditions had other degrees of entropy: 2.8 bits (4×7 As/4×7 Bs), 4.25 bits (2×14 As/2×14 Bs), and 4.8 bits (1×28 As/1×28 Bs). Participants were tested on the same types of test strings as in the first experiment. As predicted, the results of the first experiment showed that the higher the input complexity, the higher the tendency to abstract away from specific …
\\
Unlike previous findings, this model also gives a quantitative measure for the likelihood of making generalizations in different ranges of input complexity. To further test our model and its domain generality, similar studies will be run with infants, and also using visual input.
\\
*//please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author - Silvia Radulescu//
\\
----

**Talk: {{:…}}**
\\
{{:…}}
\\
*please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author - Silvia Radulescu
\\
**Poster - Abstract: {{:…}}**
\\
*//please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author - Silvia Radulescu//
\\
**Poster - Abstract: {{:…}}**
\\
*//please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author - Silvia Radulescu//
\\
**[[abstract|Abstract]]** / **{{:poster_amlap_2014_radulescu_wijnen_avrutin.pdf|Poster}}**: **Limits and Variations of Linguistic Generalizations**
\\
*//please note this is copyrighted material, do not share, copy or distribute in any way without specific written permission from the author - Silvia Radulescu// — //[[sil.radulescu@gmail.com|Silvia Rădulescu]] 2014/12/27 14:40//
\\
----

[[shannon_entropy_calculator|Shannon Entropy Calculator]]\\
[[http://www.wolframalpha.com/|Wolfram Alpha]]\\
[[http://www.uccs.edu/|Effect Size Calculator]]\\
[[https://www.polyu.edu.hk/mm/effectsizefaqs/calculator/calculator.html|Another Effect Size Calculator]]\\
[[https://effectsizefaq.com/|Effect Size FAQs]]
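Alongside the effect-size calculators linked above, Cohen's d for two independent groups can also be computed directly. This is a minimal sketch with made-up numbers (`cohens_d` is an illustrative helper, not output from any of the linked tools):

```python
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical grammaticality ratings from two experimental conditions
high_entropy = [4.0, 4.2, 3.8, 4.5, 4.1]
low_entropy = [3.1, 3.4, 2.9, 3.3, 3.0]
print(round(cohens_d(high_entropy, low_entropy), 2))  # 4.18
```

The pooled standard deviation weights each group's variance by its degrees of freedom, matching the conventional independent-samples formula.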