different formal systems make no generalization at all. Or, infants generalize based on the formal description that is more likely to have generated the input.
\\
These data, coupled with infants’ failure to discriminate under the same familiarization conditions in Exp. 1, suggest that infants in the column condition made only the generalization involving the position of the syllable //di//.
\\
----
**Conclusions:**
\\
A question raised by the experiments is **what caused infants in the column condition to generalize based on the location of //di// rather than making the more abstract generalization**?
\\
One possibility is that the data are consistent with the **Subset Principle** (Manzini & Wexler, 1987), in which learners select among possible parameter values based on which value generates the smallest language compatible with the input data. Note that, if we interpret ‘language’ to mean ‘set of sentences,’ the learner would need to generate the sentences for each parameter value and determine which value generated fewer sentences (but see Wexler, 1993). Wexler and Manzini reduce the computational task for the learner by placing the relevant parameters in a markedness hierarchy, in which the learner begins with the least marked value, which generates the smallest language. A minimal sketch of this selection criterion follows.
+ | \\ | ||
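To make the Subset Principle concrete, here is a minimal sketch of the selection step. The syllable inventory, the two candidate languages, and all function names are illustrative assumptions, not taken from the papers cited: the learner keeps only the hypotheses compatible with every observed string and then picks the one that generates the smallest language.
<code python>
from itertools import product

# Toy syllable inventory (illustrative assumption, not from the original stimuli)
SYLLABLES = ["le", "we", "ji", "di"]

def aab_language(syllables):
    """All AAB strings: a repeated syllable followed by any different syllable."""
    return {(a, a, b) for a, b in product(syllables, repeat=2) if a != b}

def aa_di_language(syllables):
    """The narrower generalization: a repeated syllable followed specifically by 'di'."""
    return {(a, a, "di") for a in syllables if a != "di"}

def subset_choice(hypotheses, observed):
    """Keep hypotheses compatible with every observed string, then select the
    one generating the smallest language (the Subset Principle)."""
    compatible = {name: lang for name, lang in hypotheses.items()
                  if all(s in lang for s in observed)}
    return min(compatible, key=lambda name: len(compatible[name]))

observed = [("le", "le", "di"), ("we", "we", "di"), ("ji", "ji", "di")]
hypotheses = {"AAB": aab_language(SYLLABLES), "AAdi": aa_di_language(SYLLABLES)}
print(subset_choice(hypotheses, observed))  # -> "AAdi": the smaller compatible language
</code>
Note that this sketch sidesteps the computational problem raised above: it enumerates both languages in full, which is feasible only for a toy inventory.
\\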
Another possibility is consistent with **Bayesian approaches** to generalization (e.g., Tenenbaum & Griffiths, 2001), in which learners compare the subset of the input they have received to the range of input generated by different formal descriptions. For example, an infant might tacitly compute that it is extremely unlikely, given an AAB grammar, that the only input ends in //di//. Depending on its implementation, this approach may also be computationally challenging. However, it has the advantage of applying to more general (e.g., non-parameterized) learning problems, and it allows confidence in hypothesis selection to increase with input set size.
\\
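The computation gestured at here can be sketched with the size principle of Tenenbaum & Griffiths (2001): each hypothesis assigns uniform probability to the strings it generates, so a narrower hypothesis that keeps fitting the data gains posterior probability rapidly. The syllable inventory, the uniform priors, and the two hypothesis sets below are illustrative assumptions, not values from the paper.
<code python>
# Size-principle sketch (after Tenenbaum & Griffiths, 2001): each hypothesis
# assigns probability 1/|language| to every string it generates, 0 otherwise.

def posterior(hypotheses, priors, observed):
    """Unnormalized Bayesian update, then normalization over hypotheses."""
    scores = {}
    for name, language in hypotheses.items():
        likelihood = 1.0
        for s in observed:
            likelihood *= (1.0 / len(language)) if s in language else 0.0
        scores[name] = priors[name] * likelihood
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

syllables = ["le", "we", "ji", "di"]                                # assumed inventory
aab = {(a, a, b) for a in syllables for b in syllables if a != b}   # 12 strings
aadi = {(a, a, "di") for a in syllables if a != "di"}               #  3 strings

observed = [("le", "le", "di"), ("we", "we", "di"), ("ji", "ji", "di")]
print(posterior({"AAB": aab, "AAdi": aadi}, {"AAB": 0.5, "AAdi": 0.5}, observed))
# AAdi's posterior is ~0.98: three di-final strings are 64x likelier under AAdi
</code>
The sketch also shows the property noted above: with one //di//-final string the likelihood ratio is only 4:1, but with three it is 64:1, so confidence grows with input set size.
\\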
Importantly, both solutions to the induction problem entail learners choosing among formal descriptions that they have already generated from the data using general-purpose mechanisms (Saffran, Reeck, Niebuhr, & Wilson, 2005) or that are part of their innate endowment for language (e.g., Valian, 1990). For example, Saffran et al. (2005) demonstrated that the structure of the input determines the primitives (in this case, absolute vs. relative pitch) over which generalizations are made. This type of research, in which learners ‘choose’ among different generalizations allowed by the input data, may ultimately allow us to distinguish between theories of language development.