

How Does Experience Shape Early Development? Considering the Role of Top-Down Mechanisms

Abstract

Perceptual development requires infants to adapt their perceptual systems to the structures and statistical information of their environment. In this way, perceptual development is not only important in its own right, but is a case study for behavioral and neural plasticity—powerful mechanisms that have the potential to support developmental change in numerous domains starting early in life. While it is widely assumed that perceptual development is a bottom-up process, where simple exposure to sensory input modifies perceptual representations starting early in the perceptual system, there are several critical phenomena in this literature that cannot be explained with an exclusively bottom-up model. This chapter proposes a complementary mechanism where nascent top-down information, feeding back from higher-level regions of the brain, helps to guide perceptual development. Supporting this theoretical proposal, recent behavioral and neuroimaging studies have established that young infants already have the capacity to engage in top-down modulation of their perceptual systems.

1. INTRODUCTION

How does experience support development? This is not a question about the trajectory of development (e.g., "what happens when"), nor about the relative importance of experience vs innate capacities. It is a question about the means or mechanisms by which one factor, experience, shapes a developing brain. Asking how experience supports development presupposes that experience plays some important role, and that assumption is well supported in perceptual development. Building perceptual systems that adaptively reflect an individual's environment forms the foundation of more complex perceptual-cognitive abilities: language comprehension is supported by changes in speech perception, and social development is supported by the development of face perception. Regardless of what balance of innate and experience-driven mechanisms is at play in perceptual development, it is widely accepted that an infant's individual experience plays some crucial role. For example, infants adapt themselves to their native language and the faces of their communities, but babies of English-speaking parents are not born any better prepared to speak English than any other language. Thus, perceptual systems must adapt themselves to the structures and statistical information that infants experience. In this way, perceptual development is not only an important topic in its own right, it is a case study for behavioral and neural plasticity: powerful mechanisms that have the potential to support developmental change broadly, starting early in life.

Thus, on one hand, perception is a domain that readily lends itself to studying how experience shapes development because it is uncontroversial that experience plays some crucial role. On the other hand, perceptual development is a difficult domain in which to investigate these questions because developmentalists have strong, preexisting assumptions about how experience shapes perception. Specifically, there are long-held beliefs that experience supports perceptual development through a passive accumulation of experience. That is, the type of sensory input that a developing individual receives helps to solidify internal perceptual representations. This bottom-up view of perceptual development is part of the foundation upon which research in this area has been built.

This chapter proposes a complementary mechanism by which experience can shape perceptual development and development in general. Specifically, the thesis of this chapter is that experience can shape perceptual development in a top-down fashion. Experience with patterns and structures in the environment allows the engagement of high-level cognitive systems, such as learning and memory and attentional/executive function systems, and these higher-level systems can exert a top-down influence on perception. These top-down connections can shape the pattern of activity and responses in perceptual systems and, thus, are part of the mechanism by which experience affects perceptual development. In addition, establishing these top-down connections can also start to create the larger networks in the developing brain that will support the development of cognitive function more broadly.

In this chapter, I will start by briefly addressing the widely held but largely implicit assumption that perceptual development is bottom-up. Then, I will present evidence for why a bottom-up model is not sufficient to explain a number of behavioral phenomena, nor the organization of mature perceptual systems seen in adults. I propose that the presence of nascent top-down influences on developing perceptual systems will better explain these key behavioral phenomena and provide new insight into how experience is shaping perceptual development. In addition, I will survey the current evidence, both behavioral and neural, that young infants already have the capacity to engage in top-down modulation of their perceptual systems.

2. AN (IMPLICIT) BOTTOM-UP MODEL OF PERCEPTUAL DEVELOPMENT

Perception is not a single ability but is often considered a range of abilities, from low-level (e.g., visual acuity) to high-level functions (e.g., face perception). In this chapter, I focus on the perception of high-level stimuli such as speech sounds and faces because the role of experience is quite clear compared to lower-level perception. The discovery of perceptual narrowing for faces (Kelly et al., 2007; Pascalis et al., 2005) and speech (Werker & Tees, 1984), with important parallels in multisensory processing (Lewkowicz & Ghazanfar, 2006), has sparked great interest. Very young infants have the capacity to perceive all faces (e.g., human and nonhuman primate faces) and all speech sounds (i.e., those of many different languages) equally well, and with development, infants lose their ability to discriminate faces and speech sounds that are not present in their environment. In other words, young infants are initially better at perceiving than adults, because adults exhibit diminished perceptual abilities for faces and speech that are not present in their environment vs those that are. The investigation of these phenomena has resulted in a strong case that experience matters for the development of face and speech perception. This is partly because experience with faces and speech is not uniform across infants. Infant A, who will learn language A, is surrounded by speech sounds that differ markedly from those surrounding infant B, who will go on to learn language B. Thus, it is easier to see when the perception of infant A differs from that of infant B and to attribute that difference to their differential experiences. It has been well documented that infants change their perception to fit their specific environments starting after 6 months (for reviews across domains, see Maurer & Werker, 2014; Scott, Pascalis, & Nelson, 2007). Knowledge of the developmental time course of these effects has allowed researchers to manipulate the types of experience that infants receive during this period and demonstrate experience-dependent effects on perceptual narrowing (Scott & Monesson, 2009).

Compare this to the question of whether visual experience is important for the development of lower-level perceptual abilities, for example, visual acuity. The developmental trajectory of increases in visual acuity has been a focus for decades and is relatively well determined (Arterberry & Kellman, 2016), but it is difficult to uncover the role of early experience because infants have, by and large, similar experiences (e.g., with objects in the world, natural images). Thus, tackling the question of how experience matters for the development of lower-level perceptual abilities is much more difficult: Daphne Maurer and her research team have recruited adults affected by cataracts early in life to investigate the effects of early visual deprivation on visual development (Maurer, Lewis, & Mondloch, 2005); Helen Neville and colleagues have investigated adults who are blind or deaf to examine perceptual plasticity (Neville & Bavelier, 2002); and a number of groups are starting to investigate these questions with premature infants to determine whether additional weeks of extrauterine experience affect perceptual development (Bosworth & Dobkins, 2009; Dobkins & McCleery, 2009; Peña, Pittaluga, & Mehler, 2010). Animal models are well suited to addressing these scientific questions. In their classic work, Hubel and Wiesel found striking evidence that typical experience is necessary for the development of receptive fields in early visual cortex (Wiesel & Hubel, 1963). However, asking how experience supports the development of low-level perceptual abilities in humans has been difficult both pragmatically and interpretively, and thus the role of experience is less clear. For these reasons, this chapter will focus on high-level perceptual abilities, such as face and speech perception, and the phenomenon of perceptual narrowing.

There is strong empirical evidence that experience shapes face and speech perception, yet there is little evidence as to how experience shapes the development of these perceptual abilities. There are, however, strong and often implicit assumptions that experience shapes perceptual development through bottom-up processes. What do I mean by bottom-up processes? In this context, bottom-up refers to a passive accumulation of experience: perceptual development is entirely stimulus-driven, requiring no cognitive engagement from the infant. In this view of perceptual development, sensory experience strengthens or solidifies internal perceptual representations, starting from the lowest levels of the perceptual system and moving to higher levels. Relatedly, the information flow of bottom-up processing is unidirectional, starting from the lowest levels of the visual hierarchy and moving upward toward cognitive systems without any feedback of information from higher-level systems to lower-level systems (see Table 1, left panel; Fig. 1).

Fig. 1. Boxological representation of bottom-up vs top-down models of perception.

Table 1

Characteristics of Bottom-Up and Top-Down Models of Perceptual Development

Bottom-Up: Passive. Experience supports changes through the simple accumulation of experience.
Top-Down: Active. Certain types of experience produce engagement in higher-level systems, which can change perception.

Bottom-Up: The same sensory experience yields the same changes in perception.
Top-Down: The same sensory experience can yield different changes in perception depending on the structural/cognitive/task context.

Bottom-Up: Unidirectional, feedforward flow of information from the lowest levels of perception to cognitive systems.
Top-Down: Feedback creates a bidirectional flow of information across the levels of perception and cognition.

Bottom-Up: Experience changes perception in a feedforward, hierarchical fashion (i.e., lower levels before higher levels).
Top-Down: Higher-level cognitive abilities (e.g., learning/memory, attention, reward, executive function) can influence perception at any level.

Bottom-Up: Experience changes perceptual representations/sensitivity.
Top-Down: Experience may change the interpretation of perceptual information but not necessarily perceptual sensitivity.
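
To make the contrast in the first two rows of Table 1 concrete, consider a minimal sketch, assuming a simple delta-rule learner whose updates can be gated by a top-down engagement signal. Everything here (the learn function, the learning rate, the engagement schedule) is a hypothetical illustration, not a model from the developmental literature; it shows only how identical sensory input can yield different outcomes once higher-level engagement matters.

```python
# Illustrative sketch: identical sensory input, different perceptual
# outcomes depending on a top-down "engagement" gate. All parameters
# are hypothetical.

def learn(stimuli, engagement, lr=0.1):
    """Track a 1-D perceptual sensitivity toward the experienced stimuli.

    engagement: per-trial top-down gate in [0, 1]. A purely bottom-up
    learner is the special case where the gate is always 1.
    """
    sensitivity = 0.0
    for s, g in zip(stimuli, engagement):
        sensitivity += lr * g * (s - sensitivity)  # gated delta rule
    return sensitivity

stimuli = [1.0] * 50                                # same experience for both learners
bottom_up = learn(stimuli, [1.0] * 50)              # input alone drives change
top_down = learn(stimuli, [0.0] * 25 + [1.0] * 25)  # change requires engagement

print(f"bottom-up learner: {bottom_up:.2f}")  # ~0.99
print(f"top-down learner:  {top_down:.2f}")   # ~0.93: same input, different outcome
```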

This bottom-up model is well reflected in Hubel and Wiesel's classic studies demonstrating that the development of the visual system is affected by early visual experience. By manipulating whether kittens received normal or substantially altered experience, they found that sensory input, as opposed to maturation, shapes the receptive fields of the lateral geniculate nucleus (LGN, the first location of visual processing in the brain after the retina) and then the receptive fields of the early visual regions of the cortex (i.e., V1 or striate cortex). This is a classic bottom-up model in which development is presupposed to occur through the amount and type of sensory input. It also catalogs development starting from the periphery of the nervous system, where experience is first encountered, and proceeding up the visual hierarchy. Another important feature of this bottom-up model is that representation of, or sensitivity to, sensory input is what is being shaped (i.e., kittens with substantially altered visual input exhibit reduced neural sensitivity to certain types of sensory input). This model assumes that no engagement is required by any system or region of the brain beyond the lowest levels of the visual system where receptive field changes are occurring. Sensory input shapes the LGN and then V1 and does not require the engagement of higher-level systems such as attention or learning/memory. Finally, this bottom-up model relies heavily on the notion of critical periods, where early sensory input results in changes in the visual system but later sensory input does not. Thus, the classic model of perceptual development suggests a highly inflexible adult visual system in which sensory input no longer shapes perceptual function.

This model of how experience shapes perceptual development has become the implicit assumption of developmental psychology regardless of what type of perceptual ability is being considered (e.g., low level as investigated by Hubel and Wiesel vs high level as is the topic of this chapter). This assumption of a bottom-up model of perceptual development can be seen in the framing of how experience leads to perceptual narrowing. Researchers will often characterize the structure of the infant's environment as the cause of perceptual narrowing. For example, researchers often state that infants become attuned to the more frequent perceptual exemplars in their environment (Scott et al., 2007). In this way, they have assumed that the infant's perceptual system is simply a reflection of the biases present in the experience the infant receives. Perceptual narrowing occurs as infants soak up sensory input, and it just so happens that their experiences are biased toward some exemplars over others (e.g., their native language), so their perceptual abilities will naturally reflect that. Similarly, researchers often point to the raw amount of experience overall as the main way that experience can contribute to changes in perception. For example, Maurer and Werker (2014, p. 171) point to the "simple accrual of experience" as the candidate way that experience can result in perceptual narrowing.

Overall, there is an assumption that infants passively soak up their perceptual experience, and in this way their perceptual systems come to reflect the structure of their environment. This assumption reflects a view of perceptual development as a bottom-up process where experience shapes development by simple exposure, whereby perceptual systems become a kind of summary of early sensory experiences. This bottom-up view is attractive for a number of reasons. First, it follows from a key historical model (i.e., that put forward by Hubel and Wiesel). Second, it broadly fits the pattern of perceptual narrowing, where biases in environmental exposure result in biases in perceptual abilities (e.g., language A vs language B). A logical leap is to conclude that this heterogeneity in sensory experience alone drives changes in perception (i.e., that development is bottom-up or entirely stimulus-driven). Third, it is parsimonious, requiring information flow in only a single direction. However, there are empirical findings that cannot be accounted for by a purely bottom-up model of perceptual development. I review these challenges to a bottom-up model next.

3. CHALLENGES TO THE BOTTOM-UP VIEW OF PERCEPTUAL DEVELOPMENT

Several findings are not readily accommodated by a bottom-up model of how experience shapes the development of perception. These findings suggest that infants are not simply passively soaking up their experience and that their developing systems are not simply a summary of their sensory input, as a bottom-up model predicts. Further, the understanding of the adult perceptual system has shifted radically in the last decade, with a renewed focus on how top-down processes enable effective perceptual systems that are highly attuned to the environment. Given that earlier views of the adult perceptual system were also dominated by bottom-up accounts, this shift in the understanding of the developed system calls for a corresponding reconsideration of perceptual development and the role of top-down processes within it.

3.1. Infants Do Not Passively Absorb Sensory Experience

A bottom-up model is committed to the claim that if sensory input is the same, then the perceptual outcomes will also be the same. That is, if two infants see the same distribution of faces (i.e., which ones are frequent and which ones are infrequent), then they will develop the same perceptual abilities. This commitment follows directly from the view that sensory input drives perceptual development without the involvement of any other system (see Table 1). However, there have been influential demonstrations that the same sensory input does not result in the same perceptual outcomes. In the seminal study by Scott and Monesson (2009), 6-month-old infants were given the same visual experience with monkey faces (via a storybook read by their parents), but the linguistic context varied across groups. Over 3 months, one group of infants simply saw the pictures, another group saw the pictures labeled with the category label "monkey," and the last group saw the pictures with each monkey given its own name (i.e., the individuation group). According to a bottom-up model of perceptual development, these infants should all have the same outcomes, as they all viewed the monkey faces during the same period of time; and yet only infants in the individuation group, where each monkey had its own name, maintained their discrimination of monkey faces. In other words, having parents provide proper names for each pictured monkey affected infants' visual development beyond their visual experience.

Similar examples have been found in audition for speech perception. Yeung and Werker (2009) found that 9-month-old infants were able to discriminate between two nonnative speech sounds but only when each sound type was paired with its own novel object. Infants used lexical context to perform this difficult auditory discrimination when auditory experience alone was not enough. Similarly, Thiessen (2007) examined word learning in a well-known context in which infants can discriminate two speech sounds presented in isolation (daw/taw) but fail to make use of this auditory information in a word-learning task (i.e., they cannot learn that one object is a daw and the other object is a taw; see Stager & Werker, 1997). Thiessen (2007) found that when infants were familiarized with this distinction in combination with a perceptually salient distinction (daw-bow/taw-goo), they were able to use this structural or distributional information to perform the difficult word-learning task, even when the daw/taw distinction was again presented in isolation. Thus, in both of these examples, simple auditory experience with speech contrasts was not sufficient to drive changes in perception or the use of perceptual information, but augmenting this auditory input with additional structural information supported better perception.

There are some suggestions that the nature of perceptual processing after perceptual narrowing may not be consistent with a purely bottom-up model. Specifically, because a bottom-up model affects the lowest levels of the perceptual system first and is believed to shape perceptual representations, this model predicts that perceptual narrowing should result in a reduction of perceptual sensitivity to nonnative or infrequent stimuli. Perceptual narrowing reduces behavioral performance in recognition tasks, but a number of findings show that perceptual narrowing might affect the efficacy of the interpretation of nonnative stimuli (i.e., later stages of perceptual processing) rather than reduce perceptual sensitivity. As reviewed in Scott et al. (2007), researchers have reported that, in the absence of behavioral demonstrations of perceptual discrimination, infants demonstrate neural discrimination. Specifically, in circumstances where infants do not reliably respond to changes in face or speech identity for nonnative categories (e.g., faces from nonexperienced groups), there remain systems in their brains that are sensitive to these changes (however, see Grossmann, Missana, Friederici, & Ghazanfar, 2012 for a contrary example in cross-modal face-voice matching). Wu et al. (2015) found evidence in adults of intact early ERPs after perceptual narrowing but altered late ERP components. While there are a number of possible explanations for these results, these studies establish that the absence of behavioral responses is not necessarily indicative of an absence of perceptual representations or of a reduction in perceptual sensitivity to nonnative stimuli, as would be predicted by a bottom-up model of perceptual development.

Similarly, Markant, Oakes, and Amso (2016) investigated whether selective attention may play a role in the decline of perception for other-race faces. These researchers used an ingenious attentional manipulation, relying on inhibition of return, to control the engagement of infants' attentional systems. Inhibition of return is a phenomenon where a reflexive or automatic shift of attention to a particular spatial location (e.g., orienting toward a sudden flash on a screen) is followed by an internal inhibition against returning attention to that spatial location and a tendency to turn attention to a new location. Using this attentional phenomenon allowed these researchers to ensure that infants were perceiving native and nonnative stimuli with the same attentional processes, and they found that when selective attention is controlled in this way, the other-race effect disappears. Convergent with the neural results presented earlier, Markant et al. (2016) suggest that the other-race effect may not be exclusively driven by declines in perceptual sensitivity or perceptual representations but by differential engagement of high-level systems such as selective attention. Finally, intriguing results by Hadley, Pickron, and Scott (2014) suggest that the effects of perceptual training are not stimulus specific and that sensory input during development is not simply fine-tuning specific representations but permits infants to develop higher-level perceptual skills. Infants who received experience with nonnative visual categories (monkey faces or strollers) paired with individual, proper names (as in Scott & Monesson, 2009) were followed up at 4 years of age to examine how this early experience affected their neural processing of human faces and of the nonnative visual categories for which they received training. Surprisingly, these children had more mature neural responses to human faces but did not exhibit any differences for the nonnative categories to which they were exposed during training. Infants who received individuation training for a nonnative perceptual category became better at processing human faces years later but did not retain better individuation for the nonnative categories for which they received training. These findings suggest that the additional, early perceptual experience, which has lasting impacts on perception during infancy, is not affecting perceptual representations for the nonnative stimuli but is building more generalizable perceptual skills. Hadley et al. (2014) suggest that this additional training boosts infants' abilities to individuate faces and does not directly affect infant perceptual representations.

Taken together, these findings are difficult for a bottom-up model of perceptual development to accommodate and suggest that additional, complementary models should be considered. A bottom-up model requires that the specifics of the sensory experience be the essential factor driving changes in perception. However, research has shown that sensory experience with the relevant stimuli (e.g., faces or speech) on its own is not sufficient to drive changes in perceptual development. Instead, infants are sensitive to the structure of the environment that supports these perceptual distinctions (Scott & Monesson, 2009; Thiessen, 2007; Yeung & Werker, 2009). Moreover, a bottom-up model is committed to experience affecting perceptual representations and not other processes, and these representations should be at the lowest possible level of perceptual processing. Again, there are strong counterexamples. There are cases where infants retain neural sensitivity but lose behavioral sensitivity, suggesting that their lower-level perceptual representations are unaffected. Similarly, recent findings suggest that selective attention may partly explain perceptual narrowing (Markant et al., 2016) and that perceptual experience might not affect perceptual representations per se but broader, general perceptual skills (Hadley et al., 2014).

3.2. Top-Down Processes Support Effective Perception in Adults

Another reason to revisit a purely bottom-up account of perceptual development is that the adult perceptual system is highly affected by top-down information.

Historically, adult perception was studied as a bottom-up, feedforward system in which information is processed hierarchically. The dominant view was also that the perceptual system is inflexible in adulthood and does not change based on sensory input later in life (i.e., the critical period hypothesis). However, the last two decades of research have uncovered the myriad ways in which adult perceptual systems continue to be shaped by experience (both in the short term and over longer periods of time) and how permeable perception is to the influence of top-down connections from other systems. This research has amounted to a recharacterization of adult perception as relatively plastic or changeable and as the nexus of both bottom-up and top-down information. This shift in the understanding of adult perception raises the question: How and when does a highly top-down, adult perceptual system develop? If experience supports changes in perception in a strictly bottom-up fashion during development, why does the adult system readily employ top-down mechanisms to shape perception in response to experience and task demands? This section summarizes recent findings of top-down influences on perception in adults (also see Gilbert & Li, 2013).

Consider the primary visual cortex (V1, the earliest cortical visual area). This region has been viewed as the canonical bottom-up perceptual region, as it is the first in the cortex to receive incoming visual information ascending from the retina. Yet the majority of the neural information arriving at the primary visual cortex is not bottom-up sensory input but rather arrives via top-down or feedback pathways from the rest of the brain. V1 receives five to eight times more excitatory feedback or top-down connections than feedforward connections (i.e., 45% feedback vs 6–9% feedforward from the LGN; see Latawiec, Martin, & Meskenaite, 2000).

Fig. 2 provides a visualization of the classically studied bottom-up, feedforward visual system along with the myriad feedback connections received by the adult visual system. Some of these feedback connections are within the visual system itself, with higher-level areas feeding back to lower-level areas, but others arrive from regions and systems far beyond the visual system, such as the frontal cortex. As summarized by Gilbert and Li (2013, p. 350), "[f]or every feedforward connection [in the adult visual system], there is a reciprocal feedback connection that carries information about the behavioral context." Thus, from a purely neuroanatomical perspective, top-down information is a significant factor influencing the activity of the adult visual system.

Fig. 2. Feedback pathways carrying top-down information to the early visual system in the adult brain. Blue arrows represent the feedforward, bottom-up connections across the hierarchy of visual regions. Red arrows represent feedback, top-down connections. Information from diverse cortical regions and systems is conveyed in top-down fashion to the visual system: motor/efference copy, attention, expectation, and executive functions. Reproduced from Gilbert, C. D., & Li, W. (2013). Top-down influences on visual processing. Nature Reviews Neuroscience, 14, 350–363, with permission.

In addition to the neuroanatomical argument for the importance of top-down information, it has been well documented that these feedback connections have functional significance. Top-down influences in perception take a variety of forms, including the effects of attention, expectation, learning/memory, and other more general task demands. While all of these types of top-down influences have the potential to play a role in perceptual development, this chapter will focus on effects of expectation and learning/memory. For an excellent review highlighting the potential role of attention in visual development, see Amso and Scerif (2015).

Experimental task and prior experience have broad effects on behavior, and often these effects are attributed solely to modulations of higher-level systems receiving feedforward input, rather than to changes in perception itself. However, there are many circumstances in which aspects of either prior experience or task modulate behavior by directly affecting perception. For example, in a continuous flash suppression paradigm, Lupyan and Ward (2013) found that linguistic labels (e.g., "chair") boosted sensory input that was below a perceptual threshold into conscious awareness. This is a clear example where a memory or piece of learned information about the world (e.g., what the visual features of a chair are) is being used in a top-down fashion in the adult brain to change what is being perceived. This is simply one of many examples where predictions about future sensory input or learned information about the world have an effect on perception (see Hutchinson & Turk-Browne, 2012; Lupyan, 2015; Panichello, Cheung, & Bar, 2013; Summerfield & Egner, 2009 for reviews of the impacts of top-down predictions and memory on perception).

Since expectations about the environment bias perceptual processing, where do these signals come from? One possibility is that what has been learned about the environment has simply shifted the representations in the perceptual system, and in this way, these effects of learning or expectation could be entirely consistent with a bottom-up model.

However, data strongly suggest that this is not the case and that these predictive signals that change perception originate from outside the visual system. Summerfield et al. (2006) found that when adults make decisions about ambiguous sensory input, these decision signals originate in the frontal cortex. Specifically, they found that the presentation of a given stimulus (e.g., a face) modulates bottom-up connections within the visual system, and that expectation of a specific stimulus (e.g., being in a block with many face stimuli, which creates a face expectation) strengthens top-down connections from the frontal cortex to the visual system (see Fig. 3). Similarly, Bar et al. (2001) found that adults activate their frontal cortex when experiencing unclear or difficult-to-process perceptual information (see Bar, 2003 for a specific theory of how these prefrontal cortex signals may bias visual perception, and Fig. 3).

Fig. 3. Classic evidence of frontal lobe involvement in visual perception. Left panel: Bar et al. (2001) found increased frontal lobe activity when visual images were masked (i.e., difficult to perceive) but not when stimuli were unmasked (i.e., easy to perceive). Right panel: Summerfield et al. (2006) found that when the probability of seeing a certain type of stimulus is high (e.g., a face within a block in which a high proportion of faces is presented), there are increases in the top-down connections from the frontal lobe to the visual system. Figures reproduced with permission.

In line with Bar et al. (2001), it is important to note that these top-down effects are clearest when bottom-up or sensory information is weak or ambiguous. When there are reasons to doubt or have uncertainty about bottom-up sensory input, and there is a strong top-down source of information, it is rational for an individual to rely on the top-down signal. However, this does not mean that top-down signals are only effective during these specialized laboratory tasks. Some have argued that, given the complexity of sensory input in combination with the highly structured nature of the environment, perception in the natural environment is most like these ambiguous laboratory contexts (Summerfield & de Lange, 2014).
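
The rationality of leaning on top-down information when input is ambiguous can be illustrated with the standard formula for combining two Gaussian estimates, in which each source is weighted by its precision (the inverse of its variance). This is a generic textbook calculation, not an analysis from the studies cited, and the numbers are arbitrary.

```python
# Precision-weighted combination of a top-down prior and a bottom-up
# sensory measurement (standard result for two Gaussian sources).

def combine(prior_mean, prior_var, sense_mean, sense_var):
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
    return w_prior * prior_mean + (1 - w_prior) * sense_mean

# Clear input (low sensory variance): the percept tracks the senses.
print(combine(prior_mean=0.0, prior_var=1.0, sense_mean=5.0, sense_var=0.1))   # ~4.55
# Ambiguous input (high sensory variance): the same prior now dominates.
print(combine(prior_mean=0.0, prior_var=1.0, sense_mean=5.0, sense_var=10.0))  # ~0.45
```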

In addition, consider circumstances where strong, unambiguous bottom-up sensory information is in conflict with top-down signals or expectations. In this case, the difference between top-down and bottom-up signals can be highly informative and could effectively guide learning. Predictive coding, a prominent theory of cortical processing, proposes that top-down connections are ubiquitous and essential to the functioning of the brain. Predictive coding proposes that cortical activity reflects the relation between incoming sensory input and top-down expectations, where expectations arising at each level of the cortex are conveyed to earlier levels of the system (e.g., color expectations in V4, a region sensitive to color information, are sent back to the regions that provide bottom-up input to V4). Each incoming sensory input is then compared to these expectations, and the difference between the signals is propagated forward (Friston, 2005; Rao & Ballard, 1999). There has been some support for predictive coding (though mostly in imaging studies, see den Ouden, Daunizeau, Roiser, Friston, & Stephan, 2010; Emberson, Richards, & Aslin, 2015; Kok, Failing, & de Lange, 2014; Summerfield et al., 2006; Summerfield & Koechlin, 2008; Summerfield, Trittschuh, Monti, Mesulam, & Egner, 2008). While future work is needed to support this specific theory of the brain, predictive coding has sparked widespread interest in the role of expectation or prediction in cognition and perception, as many researchers have argued that prediction explains a wide variety of phenomena (see reviews by Clark, 2013; Lupyan, 2015).
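
As a concrete sketch of the scheme proposed by Rao and Ballard (1999), the following toy implementation uses a single linear layer: a higher level maintains an estimate r of the causes of the input, sends down the prediction W @ r, and only the residual error is propagated forward to update r. The dimensions, learning rate, and noiseless input are simplifying assumptions for illustration.

```python
# Minimal predictive coding loop: the higher level updates its estimate
# of the causes until its top-down prediction cancels the sensory input.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))     # generative weights: hidden causes -> sensory input
true_r = np.array([1.0, -0.5])  # the causes that actually generated the input
x = W @ true_r                  # noiseless sensory input, for simplicity

r = np.zeros(2)                 # higher-level estimate of the causes
for _ in range(200):
    prediction = W @ r          # top-down prediction sent to the lower level
    error = x - prediction      # only the residual error is passed forward
    r += 0.05 * W.T @ error     # the higher level adjusts to reduce the error

print(np.round(r, 2))           # approaches [1.0, -0.5] as prediction errors vanish
```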

Thus, there is strong evidence that adults can use previously learned information and expectations about future sensory input to exert top-down control over perception. These are examples of top-down information influencing perception in adulthood. Moreover, neuroimaging studies have demonstrated that these top-down signals originate beyond the visual system, often in the frontal cortex.

Researchers have found that top-down effects on perception do more than exert transient, trial-by-trial effects; they can also guide perceptual learning over longer timescales. For example, researchers in the field of perceptual learning have investigated how experience can result in changes in perception in adulthood (also called visual plasticity). Typically, these changes in perception are low level and specific to given stimuli and even to regions of the visual field (e.g., a given retinotopic region of the cortex). While traditional perceptual learning was seen as a bottom-up process by which additional experience with a given stimulus shapes specific perceptual representations, a few decades of work have established that perceptual learning is highly sensitive to top-down factors. For example, Seitz, Watanabe, and colleagues have found that the amount of perceptual experience is only one factor that drives perceptual learning; reward and attention are extremely important in modulating whether experience results in changes in low-level perception (Roelfsema, van Ooyen, & Watanabe, 2010; Seitz, Nanez, Holloway, Tsushima, & Watanabe, 2006; Seitz & Watanabe, 2005). Providing convergent findings to this behavioral work, Li, Piëch, and Gilbert (2004) examined functional properties of neurons in the early visual cortex of monkeys trained on novel objects in two different tasks. They found changes in the functional properties of these neurons as a result of experience, and that learning was highly sensitive to the task that the monkeys were engaged in. This finding again demonstrates that bottom-up sensory input is not the only factor affecting how experience changes perception. Similar findings have been documented in the auditory system (Polley, Steinberg, & Merzenich, 2006). Ahissar and Hochstein (2004) have proposed the Reverse Hierarchy Theory, arguing that perceptual learning occurs in a top-down fashion, with a preference for shifts at the highest possible levels of the visual system, affecting lower levels only when changes at higher levels are insufficient.
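
The point that exposure alone does not produce learning can be captured schematically with a three-factor (gated Hebbian) update, in the spirit of the attention-gated accounts discussed by Roelfsema et al. (2010): the weight change is the product of presynaptic activity, postsynaptic activity, and a global attention/reward factor. This is an illustrative toy rule with made-up values, not the authors' implementation.

```python
# Toy three-factor learning rule: identical stimulation produces plasticity
# only when an attention/reward gate accompanies it. Values are illustrative.

def train(n_trials, pre, post, gate, lr=0.01):
    w = 0.0
    for _ in range(n_trials):
        w += lr * gate * pre * post  # pre x post x global gating factor
    return w

exposure_only = train(500, pre=1.0, post=0.5, gate=0.0)   # attended elsewhere
exposure_gated = train(500, pre=1.0, post=0.5, gate=1.0)  # task-relevant, rewarded

print(exposure_only, exposure_gated)  # 0.0 vs 2.5: same input, different plasticity
```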

Recent work has demonstrated that, in addition to experimental task, reward, and attention, learning and memory systems can also spark changes in perception. Emberson and Amso (2012) gave adults experience with novel visual stimuli. By integrating variable visual experiences, participants could construct a novel object representation that changed their perception. Despite receiving the same sensory experiences, participants showed strong individual differences: roughly half of the individuals studied changed their perception, while the other half persisted with their initial, prestudy percepts. fMRI recordings during these new experiences also revealed that activity in multiple learning and memory systems differentiated those who changed their percept with experience from those who did not. This finding further demonstrates that bottom-up sensory input is not sufficient to produce changes in perception and that higher-level systems play a role in changing perception as a result of experience (see similar findings by Stokes, Atherton, Patai, & Nobre, 2012).

Overall, these findings challenge classic, bottom-up models of perception in several ways. First, demonstrations of perceptual learning in adulthood challenge the view that, after early development, perceptual systems are inflexible and largely resistant to change as a result of new experiences. Second, in adulthood, perception and perceptual learning are highly influenced by top-down information. While sensory experiences are necessary for perceptual learning to take place, they are not sufficient. Instead, top-down processes, such as attention, reward, and learning/memory systems, respond to this experience and help to adaptively shape adult perceptual systems. Third, demonstrations that top-down information influences visual perception at the behavioral level and neural activity in perceptual regions suggest that the adult perceptual system readily takes advantage of top-down information to boost perceptual capacities. Finally, the anatomical organization of the adult brain is highly bidirectional, with bottom-up connections complemented by numerous top-down connections both within a given perceptual system and far beyond it.

4. COULD TOP-DOWN INFORMATION SHAPE PERCEPTUAL DEVELOPMENT?

In the previous section, two broad challenges to an exclusively bottom-up model of perceptual development were presented: first, findings from the developmental literature that are not compatible with an exclusively bottom-up model of perceptual development; second, evidence for the pervasive influence of top-down processes in the adult perceptual system and for top-down information guiding how experience changes perception later in life. Focusing on the latter: given that top-down information has been established to adaptively shape adult perception in response to the environment, there is a distinct possibility that these same top-down processes are involved in helping to shape perceptual development early in life. Moreover, it could be the presence of these top-down mechanisms that accounts for the discrepancy between an exclusively bottom-up view of perceptual development and the findings reviewed previously, where perceptual experience is necessary but not sufficient to explain changes in infant perception. Infant perceptual development is sensitive to the structure of the environment in which sensory experience is embedded (Scott & Monesson, 2009; Thiessen, 2007; Yeung & Werker, 2009). So too has adult perceptual learning been found to be sensitive to the context in which information is provided (though in adult perceptual learning, "task" is typically studied rather than structural context; Li et al., 2004; Seitz et al., 2006).

Yet key questions remain to be resolved: What is the top-down model of development? What commitments does it make? What phenomena (behavioral or neural) should we expect to observe if top-down processes are affecting development? These are difficult questions to answer, in part because the definition of top-down is being debated. At one extreme, top-down mechanisms are defined as volitional, available to consciousness, goal-directed, and originating entirely outside any perceptual system (i.e., the feeding back of higher levels to lower levels within the visual system is not considered top-down in this view; Firestone & Scholl, 2016). This is consistent with a modular view of perception and is largely grounded in behavioral evidence. At the other extreme, any feedback between any cortical regions (e.g., from V2 to V1) could be considered top-down, as it entails some amount of feedback. This is consistent with predictive coding, a largely neuroanatomically based theory of cortical function (Friston, 2005; Rao & Ballard, 1999). Moreover, the most successful contemporary computational model of human vision is described as an entirely feedforward system in that it does not explicitly incorporate feedback connections or information about cognitive task (Yamins et al., 2014). However, this model does employ supervised learning algorithms that provide a clear feedback signal to the model. Interestingly, the model is trained to match a picture to its word referent (e.g., to predict the word bunny when seeing a picture of a bunny). It is unclear whether this model is exclusively bottom-up in the sense of the classic models outlined in this chapter, as the method of training is surprisingly similar to the behavioral study by Scott and Monesson (2009), which involved providing infants with individual names for monkeys. Thus, this model (Yamins et al., 2014) seems different from the passive, stimulus-driven models discussed here, despite the claim that it is exclusively bottom-up. Overall, the line between bottom-up and top-down processes can be unclear, and the definition of what is top-down is currently under debate.

The definition of top-down employed here plots a middle course between the two extremes and is meant to guide future work into the developmental origins of top-down processing. Specifically, I broadly divide the hierarchy of processing into three parts: lower-level perception, higher-level perception, and cognitive. Information can be considered top-down if it jumps between these parts (Fig. 1). For example, feedback from demonstrable cognitive processes such as learning and memory or reward systems to perception at either level would be considered top-down (e.g., from Emberson & Amso, 2012, the influence of the hippocampus and basal ganglia in shifting object representations in adults). If higher-level representations within a perceptual system (or information across perceptual systems) feed back to lower-level representations, this would also be considered top-down (e.g., Lupyan & Spivey, 2008 found that processing novel stimuli as familiar categories of visual objects, such as letters, affected low-level visual processing of these objects). These individual parts are not precisely defined (e.g., Where is the boundary between high-level and low-level perceptual processing? Why is reward considered cognitive?). Instead, the focus here is, first, on the direction of information flow: top-down information originates at a point clearly further along the processing hierarchy and affects processing earlier in the hierarchy. Second, the division into these parts is meant to shift the emphasis toward top-down processing that spans greater distances, whether defined anatomically or representationally, than the local feedback emphasized by predictive coding. The inclusion of anatomical distance highlights the fact that for systems such as the frontal cortex and learning and memory systems to influence perceptual regions, they have a large anatomical distance to cover, and the developing brain starts out with very few of these long-range connections and with a bias toward local processing.
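
This working definition can be stated schematically, as in the sketch below: each region is assigned to one of the three parts of the hierarchy, and a connection counts as top-down when it descends across parts. The region assignments are hypothetical illustrations; where a given area truly sits is exactly the kind of boundary question left open above.

```python
# Schematic encoding of the chapter's working definition of "top-down."
# Region-to-part assignments are hypothetical illustrations.

PARTS = {"lower-level perception": 0, "higher-level perception": 1, "cognitive": 2}

REGIONS = {
    "V1": "lower-level perception",
    "V2": "lower-level perception",
    "FFA": "higher-level perception",
    "hippocampus": "cognitive",
    "frontal cortex": "cognitive",
}

def is_top_down(source, target):
    """A connection is top-down if it descends across parts of the hierarchy."""
    return PARTS[REGIONS[source]] > PARTS[REGIONS[target]]

print(is_top_down("frontal cortex", "V1"))  # True: cognitive -> lower-level perception
print(is_top_down("FFA", "V1"))             # True: higher- -> lower-level perception
print(is_top_down("V2", "V1"))              # False: within-part feedback (the predictive
                                            # coding case) is not counted as top-down here
```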

What commitments does a top-down model of development make? Table 1, right panel, presents the corresponding characteristics of a top-down model to the bottom-up model already discussed. Importantly, while a bottom-up model requires that the same sensory input results in the same changes in perception, a top-down model allows that the same sensory input can result in differences in perceptual outcomes if there are differences in the structural/cognitive/task context in which the sensory input is received. A top-down model allows that experience can shape perception not only through tuning perceptual representation or sensitivity but also through shifts in the interpretation of this sensory input. Finally, while bottom-up models are passive and only require the simple accumulation of experience, top-down models are active and require the engagement of systems beyond the relevant perceptual region.

One of my goals is to argue that the investigation of top-down processes in early development is an important topic for future research. However, in the remainder of this section, I will review the literature for existing evidence that infants can engage in top-down modulation of their perceptual systems, with particular emphasis on which pieces of evidence are suggestive but do not yet require invoking top-down mechanisms.

4.1. Neuroimaging Evidence for Top-Down Modulation in Infancy

The burgeoning field of developmental cognitive neuroscience is providing neuroimaging evidence for the availability of top-down mechanisms in infants. These findings include evidence of the early operation of systems that can exert a top-down influence (e.g., the frontal lobe) as well as evidence that perceptual systems are receiving and being modulated by top-down signals.

4.1.1. Prediction/Expectation Modulates Neural Activity in Perceptual Systems

Recent neuroimaging studies with infants provide convergent evidence that stimulus predictability is likely exerting a top-down influence on perceptual processing. Emberson et al. (2015) used functional near-infrared spectroscopy (fNIRS, an optical method of recording the hemodynamic response at the surface of the infant cortex that provides an alternative to fMRI for young developmental populations) to examine responses in the visual systems of 6-month-olds when they predicted a visual stimulus. Specifically, infants were given a brief familiarization to learn that auditory cues predicted visual events. After this familiarization, infants were presented with a small number of trials where the auditory cue was not followed by a visual event (20% of trials). These unexpected visual omission trials provided an avenue for disentangling responses related to visual prediction and visual error without the confounding effect of presenting a novel stimulus. Consistent with the view that top-down mechanisms are available early in development, infants exhibited robust visual cortex responses to the auditory cue when the visual stimulus was unexpectedly omitted. Thus, the visual system responded when no visual stimulus was presented, indicating that top-down signals are modulating the visual system as early as 6 months of age and after only a couple of minutes of experience (see Fig. 4).
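
The logic of this design can be summarized as a trial-generation scheme. The sketch below builds a test sequence with a 20% omission rate, as stated above; the trial count, labels, and function name are hypothetical.

```python
# Sketch of the test-phase trial structure: a fifth of trials present the
# auditory cue but omit the predicted visual event. Trial count and labels
# are hypothetical; the 20% omission rate is taken from the text.
import random

random.seed(1)

def make_test_trials(n_trials=60, omission_rate=0.2):
    n_omit = round(n_trials * omission_rate)
    trials = ["A+V- omission"] * n_omit + ["A+V+"] * (n_trials - n_omit)
    random.shuffle(trials)  # omissions are unpredictable from the infant's view
    return trials

trials = make_test_trials()
print(trials[:8])
print("omission proportion:", trials.count("A+V- omission") / len(trials))  # 0.2
```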

Fig. 4. Emberson et al. (2015) and Kouider et al. (2015) provide direct evidence of top-down effects of expectation and feedback on perceptual processing in young infants. Top panel: functional near-infrared spectroscopy (fNIRS) recordings of the temporal and occipital cortex of 6-month-old infants revealed sensory cortex responses to audiovisual stimuli (A+V+), as expected, but also occipital lobe activity when an auditory cue was presented and the visual stimulus was unexpectedly omitted (A+V- omission). This occipital cortex response did not occur for auditory-only stimuli when infants were not expecting the visual stimulus (A+V- control; Emberson et al., 2015). Bottom panel: 12-month-old infants were presented with visual stimuli that were either validly or invalidly predicted by an auditory cue. The validity of the cue modulated two components of the event-related potentials (ERPs; Kouider et al., 2015). Figures reproduced with permission.

Other researchers report similar findings that expectation modulates perceptual processing: Kouider et al. (2015) had infants learn audiovisual pairings in which visual events were either consistent or inconsistent with the preceding auditory event. Twelve-month-old infants exhibited differences in both early and late ERP components when the visual event was incongruent vs congruent with the preceding auditory cue. Specifically, there was a facilitation of the early ERP component when the visual event was congruent, suggesting top-down facilitation of visual processing as a result of cross-modal associations and prediction. The opposite pattern was observed for the late ERP component, where incongruent visual events elicited a stronger late slow-wave response, suggesting a difference in error processing or perceptual consciousness (Kouider et al., 2013).

Turning to the auditory system, Nakano, Homae, Watanabe, and Taga (2008) found that sleeping 3-month-old infants exhibit anticipatory cortical activation after experience with auditory events, demonstrating the presence of these effects in the auditory modality. Although the use of unimodal stimuli here makes the argument for top-down mechanisms less conclusive, these results are suggestive of similar top-down modulation, especially in combination with the findings of Emberson et al. (2015) and Kouider et al. (2015).

Overall, these results suggest that the infant brain is exquisitely sensitive to patterns of sensory input and rapidly modulates perceptual systems in the face of new experiences. Indeed, Basirat, Dehaene, and Dehaene-Lambertz (2014) suggest that young infants encode expectations in computationally sophisticated ways. Consistent with findings from adults (Wacongne et al., 2011), infants were exposed to temporal sequences with both local and global structures or expectancies. The authors found differences in infant ERPs to violations of local vs global expectancies, suggesting that error processing occurs in a hierarchical fashion by 3 months of age. Similarly, researchers have investigated responses to odd-ball stimuli in infancy and found evidence for sophisticated responses to novel or deviant stimuli in young infants (Ackles, 2008).
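
The local/global logic of this design (Basirat et al., 2014; Wacongne et al., 2011) separates two levels of expectancy: the final element of a short sequence can violate a local expectation, while the rare sequence type in a block violates a global expectation, whatever its local status. A schematic generator, with hypothetical sequence notation and proportions:

```python
# Schematic local/global block: the globally standard sequence can itself
# contain a local deviant. Notation and proportions are hypothetical.
import random

random.seed(0)

LOCAL_STANDARD = "xxxxx"  # final element repeats: locally expected
LOCAL_DEVIANT = "xxxxY"   # final element changes: locally unexpected

def make_block(frequent, rare, n=100, p_rare=0.2):
    """The frequent sequence sets the block-wide (global) expectation,
    so the rare sequence type is the global deviant."""
    return [rare if random.random() < p_rare else frequent for _ in range(n)]

# In this block, the locally deviant xxxxY is globally expected, and the
# locally standard xxxxx is the global deviant: the two levels dissociate.
block = make_block(frequent=LOCAL_DEVIANT, rare=LOCAL_STANDARD)
print(block[:6])
```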

In sum, a number of neuroimaging studies with young infants suggest that, early in development, perceptual systems are being modulated by top-down signals. Specifically, these top-down signals are received in contexts where infants are given a chance to predict or expect future input. In these cases, perceptual systems respond differently depending on the expectancy of the stimulus input, in a way that cannot be explained by low-level adaptation effects (e.g., repetition suppression). Moreover, these studies have revealed sophisticated neural responses by young infants to structure in the environment and suggest that early learning systems exert top-down influences on perceptual systems starting early in life.

4.1.2. Availability of Frontal Systems Early in Development

The previous section reviewed direct evidence for top-down modulation of perceptual systems. The field of developmental cognitive neuroscience has also revealed the early availability of higher-level systems in young infants. Activity in these higher-level systems, often assumed to be silent or inactive early in development, suggests that these systems have the potential to be the origin of top-down information fed back to perceptual systems.

The higher-level system that has received the most focus in infants is the frontal lobe. This is in part because of the ease of recording this area using fNIRS, as opposed to regions that are out of the field of view of this imaging modality (e.g., learning and memory systems: hippocampus, basal ganglia). Moreover, fNIRS recordings are particularly important because they allow spatial localization akin to fMRI, which ERPs do not provide.

The investigation of frontal lobe function in infancy using fNIRS has revealed the surprisingly early involvement of this slow-developing system. A long-held belief is that the frontal lobe is the last cortical system to develop, as research has shown that changes occur into late adolescence and early adulthood (Conklin, Luciana, Hooper, & Yarger, 2007; Gogtay et al., 2004). There is support for this view, as some executive functions show developmental changes into these later ages (for a review, see Grossmann, 2013). However, these findings have led to the assumption that the frontal lobes are largely silent early in development and only come online later in life. In vivo neuroimaging with children and infants has challenged this view (see Stuss & Knight, 2013 for a number of excellent chapters on the development of the frontal lobes). Neuroimaging studies instead provide ample evidence that the frontal lobes are functioning early in life. Numerous experiments across a number of domains have found that the frontal lobe is very active in young infants (Grossmann, 2013). Here I focus on studies within the domains most connected to experience-based perceptual development (i.e., prediction, learning/memory, modulating perceptual responses) and review evidence that the infant frontal lobe could be the source of top-down information early in development.

A number of studies have found evidence that the infant frontal lobe is involved in processing stimulus novelty. Nakano, Watanabe, Homae, and Taga (2009) examined the temporal and frontal cortices of 3-month-old sleeping infants while they habituated to a single auditory stimulus (e.g., "ba") and then when this stimulus was changed (e.g., "pa"). They found dishabituation/novelty responses in the frontal lobe. Interestingly, there were no significant differences in responses to habituated vs nonhabituated auditory stimuli in the temporal lobe, where such differences would be expected in adults (Zevin, Yang, Skipper, & McCandliss, 2010). This finding suggests that the frontal lobe is involved in novelty responses and change detection in infants, while perceptual cortices might take over these functional responses later in development.

Similar findings were presented by Emberson, Cannon, Palmeri, Richards, and Aslin (in press), who examined frontal and perceptual cortex responses to repeated vs variable sensory input in both the auditory and visual modalities. Six-month-old infants exhibited repetition suppression, the attenuation of sensory cortex responses after repetition of a stimulus, but only in the auditory modality. This repetition suppression was found in both the frontal cortex and the temporal cortex, where auditory perceptual processing occurs.

No changes in response to repetition were found for visual stimuli in either the frontal lobe or the occipital cortex, where visual perceptual processing occurs. These findings provide direct evidence that the frontal lobe tracks repetition and novelty in young infants, similar to the findings reported by Nakano et al. (2009), and indirect evidence that the frontal lobe might be involved in modulating perceptual systems in response to these changes in the environment.
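
Repetition suppression itself can be pictured with a toy adaptation model in which a response attenuates multiplicatively with each immediate repetition and recovers whenever the stimulus changes. The decay constant and stimulus labels are arbitrary; this illustrates the measurement, not the infant data.

```python
# Toy repetition-suppression model: responses attenuate with repetition
# and recover for novel stimuli. The decay constant is arbitrary.

def responses(stimuli, decay=0.7):
    out, prev, amplitude = [], None, 1.0
    for s in stimuli:
        amplitude = amplitude * decay if s == prev else 1.0
        out.append(round(amplitude, 3))
        prev = s
    return out

print(responses(["ba", "ba", "ba", "ba"]))  # [1.0, 0.7, 0.49, 0.343]: suppression
print(responses(["ba", "pa", "ba", "pa"]))  # [1.0, 1.0, 1.0, 1.0]: no suppression
```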

Overall, these findings suggest that young infants are employing their frontal lobes to track structure or novelty in the environment; however, the tasks that engage the infant frontal lobe might be different from the tasks that engage the frontal lobe later in development. Activity in this classic higher-level system has been shown to be a source of top-down information that modulates perceptual functions in adulthood (Bar et al., 2001; Summerfield et al., 2006). The involvement of the frontal lobe early in development in these types of circumstances raises the possibility that this higher-level region could be a source of top-down information in infants as well. However, future work is needed to more directly link activity in the infant frontal lobe with changes in perceptual systems.

4.2. Behavioral Evidence for Top-Down Modulation in Infancy

One of the major families of findings raising questions about whether perceptual development is entirely bottom-up shows that the structure underlying sensory input changes the way perceptual experience shapes infant behavior, either in the short term (Thiessen, 2007; Yeung & Werker, 2009) or the long term (Hadley et al., 2014; Scott & Monesson, 2009). Learning about the structure of the environment (e.g., word-object associations) involves systems at a higher level than the kinds of perceptual processes being affected (e.g., speech perception). Demonstrations that infants use their expanding knowledge about the structure of the environment to shape their perceptual processing are therefore evidence for top-down influences on perceptual development. A complementary finding by Feldman, Myers, White, Griffiths, and Morgan (2013) suggests that infants can use their emerging knowledge of words (i.e., the lexicon) to change their perception of speech sounds. As articulated in Swingley (2008), language development has traditionally been viewed as proceeding in a bottom-up fashion: infants learn the sounds of their native language, then use these sounds to learn words, and then use these words to learn syntax. However, it has long been known that these developmental stages are not discrete; infants have knowledge of words well before they have solidified their perception of speech sounds (Bergelson & Swingley, 2012). If language development is not purely bottom-up in this way, how does it proceed? Findings like those of Feldman et al. (2013) suggest that infants may utilize knowledge of the structure of the environment to shape changes in their perceptual abilities. Building on findings presented earlier, this section reviews and evaluates a number of related studies showing that infants have the capacity for top-down modulation of their perceptual systems early in development.

4.2.1. Generalization From Prior Experience Supports Changes in Perception: Auditory

Research shows that infants can use prior experience to change their perceptual processing, but is this evidence that infants have employed top-down mechanisms? An excellent example of work in this area comes from speech segmentation. The segmentation of fluent speech is one of the most difficult developmental tasks facing infants as they start to learn their native language: natural speech does not have pauses at word boundaries, and yet we perceive each word as temporally distinct. Numerous mechanisms have been proposed to help infants with this difficult task. Bortfeld, Morgan, Golinkoff, and Rathbun (2005) found that 6-month-old infants can use a familiar name to segment a novel word from continuous speech. Specifically, if infants were familiarized with a word they did not know (e.g., "bike") that was consistently paired with either their own name or the moniker used for their mother (e.g., "Mommy" or "Mama"), they showed evidence of segmenting and representing that word. However, this did not occur if the novel word was paired with another name or a word similar to, but not the same as, the familiar referent (e.g., "Tommy" vs "Mommy").

Is the ability to use a familiar word to segment fluent speech evidence that infants are employing top-down information to affect auditory processing? Bortfeld et al. (2005) present compelling work in contrast to those claiming that only bottom-up cues (e.g., prosody) are used for speech segmentation in infancy. In the field of psycholinguistics, the use of lexical representations to affect speech perception is seen as a top-down effect because lexical representations are thought of as higher-level representations built from speech sounds. Thus, the flow of information across levels of representation contains feedback from a higher level (i.e., words, the lexicon) to a lower level (i.e., speech sounds).

Bortfeld et al. (2005) define top-down speech segmentation as "…using stored knowledge of the phonological forms of familiar words to match the portions of the speech stream and forecast locations of word boundaries" (p. 298).
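
This definition is concrete enough to sketch computationally. The toy illustration below is not Bortfeld et al.'s model: the utterances are rendered as unsegmented letter strings, the stored form "maggies" stands in for a familiar name, and the fixed chunk length is a simplification. The point is only that a stored familiar form can forecast a word boundary and expose the recurring material that follows it.

    from collections import Counter

    def candidates_after_anchor(utterances, anchor, length):
        """Posit a word boundary at the end of each occurrence of a stored
        familiar word form (the anchor) and collect the fixed-length chunk
        that follows; a chunk recurring across utterances is a candidate word."""
        chunks = Counter()
        for utt in utterances:
            i = utt.find(anchor)
            while i != -1:
                after = utt[i + len(anchor): i + len(anchor) + length]
                if after:
                    chunks[after] += 1
                i = utt.find(anchor, i + 1)
        return chunks

    # Unsegmented renderings of sentences like "The girl rode Maggie's bike."
    utts = ["maggiesbikehadbigblackwheels",
            "thegirlrodemaggiesbike",
            "theboyplayedwithmaggiesbike"]
    print(candidates_after_anchor(utts, "maggies", 4))  # Counter({'bike': 3})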

However, is this the same meaning of top-down mechanisms that has been proposed in this chapter? Here I present accounts of how the use of familiar words to segment speech could be consistent with both bottom-up and top-down models of perception. One bottom-up explanation is that infants have discrete and relatively strong representations for their name and familiar words, such as mommy, by 6 months (Mandel, Jusczyk, & Pisoni, 1995). These representations could have been created purely from the greater frequency of their occurrence in sensory input, similar to the stronger representations of vertical lines in kittens exposed exclusively to vertical lines (Wiesel & Hubel, 1963). These stronger, frequency-produced representations might pop out of subsequent sensory input, providing a type of bottom-up segmentation cue. Indeed, the attentional pop-out effect has traditionally been seen as a classic example of bottom-up attentional processes. Moreover, Bortfeld et al. (2005) make a clear parallel between their findings with infants and the cocktail party effect, where adults reflexively shift their attention when they hear their name. In this account, the representations were created based on the simple frequency or amount of sensory input (i.e., without requiring higher-level learning and memory systems to integrate across experiences), and these memories result in what is effectively a bottom-up cue during perception (i.e., a pop-out effect); both points suggest a bottom-up explanation of this phenomenon.

However, there are alternative explanations that are consistent with the involvement of top-down processes. First, it could be that the use of familiar words is a top-down process of matching perceptual templates to incoming sensory input. For this to be a top-down process, there must be some kind of expectation to hear one's name that is applied to the input. Such an expectation could very well have arisen for infants in this study, as their names were repeated along with a novel word across several sentences (e.g., "Maggie's bike had big, black wheels," "The girl rode Maggie's bike," "The boy played with Maggie's bike."). In this case, infants were not experiencing a pop-out effect of their names; rather, they heard their name in each sentence and used this representation to initiate an expectation or perceptual template that helped them segment speech by exerting a top-down influence on lower levels of auditory processing.

A second top-down explanation, mutually exclusive with the first, is that infants do experience a pop-out effect of their names, but this pop-out effect is only possible through previous interaction between higher-level systems and the auditory system. Recent work with adults has challenged the view that all pop-out effects are purely bottom-up. Specifically, pop-out effects can be readily triggered by previous experience involving the reward system (e.g., pairing a stimulus with a previous reward; Hutchinson & Turk-Browne, 2012) or structure in the environment (e.g., a familiar scene triggers higher-level learning systems that feed back information to direct attention and perception; Awh, Belopolsky, & Theeuwes, 2012; Stokes et al., 2012). Applying this logic to the case of infants using their name to segment speech, the question becomes: how did the representations of the infants' names and their mothers' monikers arise in the first place, and did the creation of these representations depend on higher-level systems? If the answer is yes (i.e., it is not purely about the frequency of experience of their names), then the use of these familiar words to segment fluent speech would be an example of top-down processes supporting language development.

4.2.2. Generalization From Prior Experience Supports Changes in Perception: Vision

In the domain of visual perception, Quinn, Bhatt, Needham, and colleagues have investigated the role of object knowledge and perceptual categories on visual perception. This literature does focus on lower-level visual processes (as opposed to faces and speech, the primary focus of this chapter), but nevertheless provides some direct evidence for top-down processes in the development of infant vision. Overall, these studies demonstrate that variable visual experience can overcome bottom-up visual processes starting very early in development (3–4 months). These paradigms often require infants to generalize from familiarization to test trials. In addition to these behavioral findings, convergent evidence from computational models and adult neuroimaging studies also suggests that top-down processes are required for these effects.

Quinn, Schyns, and colleagues conducted a series of studies examining bottom-up Gestalt perceptual-grouping principles in young infants, asking whether object knowledge is sufficient to override bottom-up perceptual cues. At a young age (3–4 months), infants use perceptual-grouping principles to disambiguate novel sensory input. For example, in the top middle panel of Fig. 5, the sensory input is consistent with either a closed circle or a pac-man shape in the upper right of the figure. Consistent with Gestalt perceptual-grouping principles, infants perceive the circle and not the pac-man (Quinn, Brown, & Streppa, 1997). Given this finding, Quinn and Schyns (2003) first gave infants unambiguous experience with the pac-man shape (see top left panel of Fig. 5) and then presented them with the ambiguous display. Across two experiments, infants showed evidence of using their new knowledge of the pac-man shape to override the bottom-up perceptual-grouping principles.

Fig. 5.

Infants can use their object knowledge to disambiguate perceptual input. Top panel: Quinn and Schyns (2003) gave 3- to 4-month-old infants experience with a pac-man shape (left). After this experience, infants used this pac-man knowledge to override their Gestalt perceptual-grouping principles when receiving ambiguous experience (middle), as evidenced by a novelty preference for the circle at test. This novelty preference is a marked departure from looking preferences without that prior pac-man experience. Bottom panel: 8.5-month-old infants were able to employ their object knowledge of a key ring to generalize to an ambiguous display, as evidenced by longer looking to the "move-apart" event compared to the "move-together" event. Consistent with the view that infants are using their object knowledge to disambiguate this scene, when a similar object is presented that does not fit the category of a key ring, infants looked longer at the "move-together" event. Figures reproduced with permission.

Specifically, this override was evidenced by a novelty preference for the circle, but only in the presence of prior experience with the pac-man shape.

Similar to the findings in the context of speech segmentation (Bortfeld et al., 2005), these results suggest that infants are able to apply previously formed knowledge to change their bottom-up perception of their sensory input. Again, the bottom-up alternative explanation for this result is that simple repetition or increased frequency results in biases in bottom-up perception. Quinn, Schyns, and Goldstone (2006) conducted a follow-up study to address this alternative explanation by providing equal experience of the pac-man and the circle shapes, and continued to find evidence suggesting that infants were using their object knowledge to change their bottom-up perceptual propensities. Moreover, this work is supported by a computational model (Goldstone, 2000) that requires top-down influences from the categorization layer to the perceptual (detector) layer in order for these effects to occur.
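
To give a feel for this architecture, here is a minimal sketch in the spirit of such a model; it is not Goldstone's (2000) actual network, and the unit labels, weights, and mixing parameter alpha are all hypothetical. It shows only that feedback from a category layer can tip a detector-level competition away from the Gestalt-favored percept.

    import numpy as np

    bottom_up = np.array([0.45, 0.55])  # detector evidence for [pac-man, circle];
                                        # the Gestalt bias favors the circle
    category  = np.array([1.0, 0.0])    # prior pac-man experience activates its
                                        # category unit (hypothetical coding)
    W_fb      = np.eye(2)               # hypothetical category-to-detector feedback

    def perceive(bottom_up, category, alpha=0.5):
        """Detector activity = bottom-up evidence + top-down category feedback,
        normalized as a competition between the two candidate percepts."""
        act = bottom_up + alpha * (W_fb @ category)
        return act / act.sum()

    print(perceive(bottom_up, category))     # ~[0.63, 0.37]: pac-man now wins
    print(perceive(bottom_up, np.zeros(2)))  # [0.45, 0.55]: without feedback the
                                             # circle wins, as Gestalt cues dictate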

Similar findings on the effects of object knowledge and categorization on vision have been presented by Needham and colleagues. Across numerous studies using the task shown in Fig. 5, this group has demonstrated that 4.5-month-old infants are able to use prior experience and object knowledge to segment an ambiguous visual scene. Young infants used this knowledge even 24 h after initial exposure and in a different context (Needham & Baillargeon, 1998), and they generalized from variable experience with objects of different perceptual characteristics to the ambiguous scene (Needham, Dueker, & Lockhead, 2005). Infants were also able to generalize when the recent experience was of the box in a different orientation, as shown in Fig. 5 (Needham & Modi, 1999). Finally, Needham, Cantlon, and Ormsbee Holley (2006) showed that older infants (8.5 months) use the features of a familiar category (keys) to disambiguate novel exemplars of this category. Specifically, the more similar the novel visual scene was to keys, the longer infants looked when the ring moved independently of the keys, demonstrating that they segmented the scene in a manner consistent with the familiar category (see Fig. 5).

These results demonstrate that infants are able to use their object knowledge to disambiguate new visual input, paralleling the findings that infants can use a familiar word to segment fluent speech (Bortfeld et al., 2005). Additional research controlling for differences in the frequency of sensory experience with each exemplar (Quinn et al., 2006) and computational modeling (Goldstone, 2000) both point to a role for top-down processes in these phenomena. Together, these studies provide convergent evidence that infants can employ top-down processes when using visual experience to support changes in perception.

4.2.3. Generalizing From Variable Perceptual Experience

An interesting detail in Needham's work on how experience supports changes in infants' scene segmentation is that variability of experience is essential to infants' generalization of their past experience (Needham et al., 2005; Needham & Modi, 1999). This finding is relevant to adjudicating between bottom-up and top-down accounts. If infants were using bottom-up mechanisms, prior experience could work as a prime, where previously activated representations of a given object would lead to better processing of that object in the future. In this case, perceptual similarity between previous experience and future input would be essential, and indeed Needham and Modi (1999) reported that if infants are familiarized with a single box before test, this box must be extremely similar to the box used during test. This finding is consistent with what would be expected from bottom-up mechanisms. However, Needham et al. (2005) reported that when infants were exposed to three different boxes, none of these boxes needed to be as similar to the test box as in the single-exposure case. This is consistent with top-down models, where a broader, higher-level representation is formed through variable experience and feeds back to affect perceptual processing. Since variability of prior experience seems to free infants from the constraints of perceptual similarity, infants may be able to employ either bottom-up or top-down mechanisms, with the characteristics of their experience (e.g., how variable it is) helping to determine which strategy is used.

Specifically, in the case where infants have prior experience with a single box and require strong perceptual similarity, they are likely employing a bottom-up method. However, when infants are permitted variable experience, they do not have the same rigid need for perceptual similarity and are likely employing a top-down method. Variable experience is likely allowing infants to form a higher-level perceptual category of the box that can be generalized to more dissimilar test instances.
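
One way to picture why variability relaxes the similarity requirement is with a toy similarity computation. The sketch below is purely illustrative (two invented features, an arbitrary tolerance rule) and is not a model from this literature: a representation built from several variable exemplars tolerates a more dissimilar test item than one built from a single exemplar.

    import numpy as np

    def matches(test, exemplars, k=2.0):
        """Accept the test item if it falls within k standard deviations of the
        stored representation on every feature (a crude tolerance region)."""
        proto = exemplars.mean(axis=0)         # prototype of prior experience
        spread = exemplars.std(axis=0) + 0.05  # tiny floor for a single exemplar
        return bool(np.all(np.abs(test - proto) <= k * spread))

    single   = np.array([[1.0, 1.0]])                          # one box, seen once
    variable = np.array([[0.8, 1.1], [1.2, 0.9], [1.0, 1.3]])  # three different boxes
    test_box = np.array([1.3, 0.7])                            # dissimilar test box

    print(matches(test_box, single))    # False: single exposure demands near-identity
    print(matches(test_box, variable))  # True: variable experience broadens the category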

Similar findings have been reported by Quinn and Bhatt (2005). These authors examined a Gestalt grouping principle, grouping by similarity, that is beyond the reach of 3- to 4-month-old infants. Across three experiments, the researchers investigated three different types of scenes, each composed of different elements that infants could distinguish, and yet infants consistently failed at this perceptual task (Fig. 6, left panel). However, the authors found that exposing infants to a combination of all three scene types resulted in successful perceptual performance (Fig. 6, right panel). In this case, variability allowed infants to perceive a higher-level structure in the scene that longer exposure to a single scene type did not allow. This finding can be explained by both bottom-up and top-down accounts, and the authors did not provide additional evidence to support either view, but the strong parallel with Needham's findings supports a top-down account in this case as well. As infants receive this variable exposure, they are able to form a higher-level representation; in the top-down account, this higher-level representation would feed back to earlier perceptual areas to change infants' perception of future sensory input.

Fig. 6.

Young infants generalize based on variable visual experience. For example, Quinn and Bhatt (2005) found that when 3- to 4-month-old infants experienced one of three contours constructed from different elements (i.e., requiring infants to use the Gestalt perceptual-grouping principle of similarity), they could not generalize this experience to an unambiguous contour at test. This indicates that they could not perceive the contour using these elements. However, if infants were exposed to all three of these contours, they could generalize at test. Thus, variable experience supported better perception than more experience with a single type of stimulus. Figures reproduced with permission.

In both of these examples, it is not known what systems integrate across these variable experiences to give rise to changes in perception. However, convergent findings from adult learning studies suggest that higher-level learning and memory systems might be involved, which provides additional evidence that the use of variable experiences to change future perception might be mediated by top-down mechanisms. As reviewed previously, Emberson and Amso (2012) presented adults with variable experience with a novel object. At the start of the experiment, adults were not able to perceive the novel object in an ambiguous scene. After familiarization with this object embedded in three additional scenes, half of the adults changed their perception and detected the novel object, while the other half persisted with their initial percept. fMRI recordings taken during exposure revealed that the systems differentiating the adults who used their variable experience to learn from those who did not were higher-level learning and memory systems. Specifically, the adults who changed their perception activated the hippocampus and the caudate (basal ganglia) during their variable experiences, suggesting that these systems are involved in building the object representation necessary to change future perception of this object. While it is possible that infants employ different neural systems than adults, it is unlikely that infant perceptual systems have the memory capacity to compare across the numerous experiences necessary to make these comparisons. Future methodological advances allowing more sophisticated neuroimaging with infants will be helpful in answering these questions.

Finally, while there are no clear auditory analogues to these visual findings, there is some suggestion that infants can integrate across variable experience to learn about the structure of their environment, and again the amount of variability matters: more variability produces greater learning. Graf Estes and Lew-Williams (2015) found that infants can perform statistical word segmentation across many variable speakers but fail if only a couple of speakers are present during familiarization. Gómez (2002) found that infants can learn nonadjacent dependencies in an artificial grammar when the intervening, nonrelevant items are highly variable. While it is not known whether these instances of learning from auditory variability result in top-down changes in perception, this is an interesting area for future research.
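
Statistical learning of the kind tested by Graf Estes and Lew-Williams (2015) is commonly described as tracking transitional probabilities between syllables. The sketch below is a toy rendering of that common description, not their analysis: the syllable streams are invented, and pooling across "speakers" simply assumes the learner can map different voices onto the same syllable categories.

    from collections import Counter

    def transitional_probs(syllable_streams):
        """Estimate P(next syllable | current syllable), pooled over streams."""
        pair_counts, syll_counts = Counter(), Counter()
        for stream in syllable_streams:
            for a, b in zip(stream, stream[1:]):
                pair_counts[(a, b)] += 1
                syll_counts[a] += 1
        return {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

    # Two hypothetical "speakers" producing the words bi-da and ku-pa in
    # different orders; the within-word statistics survive the variation.
    streams = [["bi", "da", "ku", "pa", "bi", "da"],
               ["ku", "pa", "bi", "da", "bi", "da"]]
    tps = transitional_probs(streams)
    print(tps[("bi", "da")])  # 1.0: within-word transition is perfectly reliable
    print(tps[("da", "ku")])  # 0.5: the across-word (boundary) transition is weaker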

4.3. Neuroanatomical Evidence for Top-Down Modulation in Infancy

While there is strong evidence from both behavioral and neuroimaging studies that young infants have the capacity for top-down modulation of perceptual systems based on learning, experience, and expectation, infants have remarkably poor overall neural connectivity. Neural connectivity is the means by which information is transmitted between systems of the brain. The clear presence of top-down neuroanatomical connections throughout the mature, adult brain (e.g., within perceptual systems and between cognitive and perceptual systems) is a strong piece of evidence in favor of top-down mechanisms later in life (see Fig. 2). However, these connections are remarkably weak early in development. It must be noted, though, that investigations into neural connectivity in young developmental populations are much sparser, and connectivity is more difficult to ascertain because the long-range connections of the brain are not yet myelinated; myelination is a necessary precondition for detecting these connections using popular, noninvasive techniques such as diffusion tensor imaging (DTI). Nevertheless, there are three major sources of information about neural connectivity early in development. Two are noninvasive measures in humans: (1) structural connectivity, measured with DTI, and (2) functional connectivity, measured with fMRI. The third is (3) results from animal models of development. This section reviews each of these literatures in turn.

Studies of structural and functional connectivity in human infants reveal large developmental changes in the connectivity of the brain through the first years of life. In general, the brain starts out less interconnected and becomes more connected with age (Ball et al., 2014; Fair et al., 2009; Gao et al., 2011; Sasai et al., 2012; Smyser et al., 2010; Tymofiyeva et al., 2013; Yap et al., 2011). However, there is controversy with respect to how connected or disconnected the young brain is (i.e., how local or global). Some researchers claim that large-scale networks, similar to adults, exist at birth (Ball et al., 2014; Doria et al., 2010), while others report evidence of only local, proximity-driven networks in young infants (Fransson, Aden, Blennow, & Lagercrantz, 2011; Fransson et al., 2007; Gao et al., 2011; Sasai et al., 2012; Smyser et al., 2010; Tymofiyeva et al., 2013; Yap et al., 2011). Regardless of how interconnected or local the young brain is, there is clear evidence of a dramatic increase in connectivity with development, with the brain becoming more interconnected and forming more long-range, global connections. Measures of functional connectivity and structural connectivity converge on this developmental direction (Fig. 7).

Fig. 7.

Increases in both global connectivity and long-range connections in early development. Left panel, from Tymofiyeva et al. (2013), presents (top row) structural MR images of neonates, 6-month-olds, and adults; (middle row) long-range fiber tracts identified using diffusion tensor imaging (DTI), with notable differences in connection density between age groups; and (bottom row) brain networks represented as weighted graphs. The size of each node represents its node degree, that is, how many connections that area of the brain has; a large node is a network hub. Again, there are notable differences in the relative size of the nodes and the distribution of connections between the nodes. Right panel, from Gao et al. (2011), presents a graphical representation of the neonate and 2-year-old brain with nodes shaded by lobe. Neonate connections are largely local, and nodes within each lobe cluster together; by age 2, however, networks span lobes, as seen in the less distinct clustering by lobe. Moreover, there is a clear increase in the distance between connected nodes as visualized in the graph; distance is normalized to the size of the brain at each age. Figures reproduced with permission.
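
The node degree measure described in this caption is easy to state concretely. The sketch below uses an invented toy graph (the region labels and edges are hypothetical, not a real connectome); it simply counts each node's connections and identifies the most connected node as the hub.

    import networkx as nx

    # Toy "connectome": nodes are brain regions, edges are detected connections.
    edges = [("V1", "V2"), ("V1", "IT"), ("IT", "PFC"),
             ("PFC", "A1"), ("A1", "STS"), ("STS", "PFC")]
    G = nx.Graph(edges)

    # Node degree = number of connections; the highest-degree node is a hub.
    degrees = dict(G.degree())
    hub = max(degrees, key=degrees.get)
    print(degrees)  # {'V1': 2, 'V2': 1, 'IT': 2, 'PFC': 3, 'A1': 2, 'STS': 2}
    print(hub)      # 'PFC': the most connected region in this toy graph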

Animal models of neural connectivity early in development are also relatively sparse. However, existing research on the development of cortico-cortical interactions in nonhuman primates suggests that, within the occipital lobe, both feedback and feedforward connections exist early in development. While some aspects of a visual hierarchy are thus present early, feedforward connections dominate. With visual experience, the balance shifts: feedback connections increase and feedforward connections decrease (Price et al., 2006).

Interestingly, long-range connections within the occipital lobe of cats emerge from the more local connections present early in development (Kaschube, Schnabel, Wolf, & Löwel, 2009). Findings like these again suggest that, early in development, neuroanatomical connections are readily available for feedforward, bottom-up processing, while feedback, top-down, and global connections develop over time.

Even though there is a clear bias toward local, feedforward connections early in development compared with adulthood, and the development of global connections continues well into adolescence (Fair et al., 2009), this does not mean that young infants cannot use their relatively weak global connections to exert top-down influences on their perceptual systems. It is possible that the contexts in which top-down influences are seen in neuroimaging and behavioral studies are part of the developmental mechanism by which these global connections are built. Indeed, neuroimaging work with adults suggests that learning and predictability can rapidly modify connectivity (in this case, functional connectivity) between disparate brain regions (den Ouden et al., 2010).

5. CONCLUSIONS AND FUTURE DIRECTIONS

Perception is where cognition and reality meet (Neisser, 1976). What is the nature of that meeting? According to a bottom-up view of perception and perceptual development, it is a unidirectional meeting where reality walks through a passive perceptual system, and perception is simply a turnstile to the rest of the brain. Alternatively, perception could be the place where cognition and sensory input rendezvous. It could be a place where cognition makes guesses about the nature of reality, and reality reveals itself.

This latter characterization of perception, as a nexus of bottom-up information coming from the external world and top-down information coming from the rest of the brain, is receiving a great deal of support and interest from those studying mature perceptual systems. Top-down processes have been found to affect adult perception and guide perceptual learning. However, the possible role of top-down processes in the development of perception has received little attention. Instead, it has been largely assumed that experience supports perceptual development in a purely bottom-up fashion.

In this chapter I have argued that a strictly bottom-up model is not sufficient to explain known phenomena in perceptual development nor how a highly top-down adult perceptual system develops. Moreover, there is already evidence that infants as young as 3 months engage in top-down modulation of their perceptual systems when they are given the opportunity to predict or learn from sensory input. While this chapter argues for the early availability of top-down processes, further work is needed to investigate the extent and efficacy of these abilities.

It is also clear that even if top-down information is available early, these abilities are not mature early in life. However, immaturity does not mean that top-down influences cannot play a role in shaping development. It has been well established that some types of top-down processes develop well into adolescence (Hwang, Velanova, & Luna, 2010). But this is only one sense of the term top-down (i.e., inhibiting responses to salient stimuli); other types of top-down information, with other functions or involving other systems (e.g., learning/memory systems), could have different developmental time courses, be available earlier, and be more ecologically relevant for younger developing populations.

Certainly, the higher-level systems that would be the origin of a top-down influence are themselves developing (e.g., attention, memory, executive function), so either the nature or the efficacy of the information they can feed back will change across development. Moreover, neural connections between regions of the brain exhibit a protracted developmental trajectory, and better connections will enable more effective feedback. Even though both the higher-level systems that can originate top-down signals and the connections between higher- and lower-level systems are developing, this does not mean that top-down information is not exerting feedback control over earlier systems. Indeed, it could be the use of these nascent top-down connections that helps to establish the highly top-down adult perceptual system and the global, distributed networks that characterize the adult brain.

Finally, it is also likely that top-down and bottom-up processes are highly interdependent within development. Amso and Scerif (2015) hypothesize that "with visual development, there is an increase in feedforward information competing for attention allocation in higher-level regions, thus linking top-down visual attention development with visual experience. In turn, these regions, now engaged, send top-down signals to begin to tune local visual areas, setting the hierarchical loops in motion from very early in the first postnatal year" (p. 609). For this reason, it is important for future work to balance the general assumption of bottom-up models of perceptual development with investigations into the role of top-down processes in perceptual development.

ACKNOWLEDGMENTS

This work was supported by NICHD 4R00HD076166-02.

Footnotes

a. Nelson, Ellis, Collins, and Lang (1990) also examined infant neural responses to missed or unexpectedly omitted stimuli in a cross-modal cueing paradigm where an auditory event predicts a visual event. This study employed ERPs with 6-month-olds and found that in trials where the predicted visual stimulus was unexpectedly omitted, there was no evidence of a surprise component, but ERPs to the presented visual stimulus were modulated in the subsequent trial. There are a number of differences between these paradigms that might explain the discrepancy in the findings. For example, the Emberson et al. (2015) procedure has auditory cues overlapping with the start of the visual presentation, which likely produces stronger cross-modal learning and may provide more opportunity for top-down signals to be produced and detected. Moreover, it is possible that there are differences in the detection of top-down signals with ERPs and fNIRS; however, similar responses to an unexpected auditory omission have been reported using intracranial ERPs in adults (Hughes et al., 2001). These are important avenues for future investigation.

b. It is unclear why Emberson et al. (in press) and Emberson et al. (2015) found evidence of perceptual system modulation during repetition/variability and unexpected omissions, respectively, while Nakano et al. (2009) did not. One possibility is that the 3-month-olds studied by Nakano et al. (2009) are too young to engage in this kind of top-down modulation, or that sleeping infants are unable to do so. Alternatively, it could be that the task does not sufficiently encourage infants to predict upcoming sensory input, so top-down signals are not communicated back to the temporal lobe. Future work is needed to investigate these possibilities.

REFERENCES

  • Ackles PK (2008). Stimulus novelty and cognitive-related ERP components of the infant brain. Perceptual and Motor Skills, 106(1), 3–20. [PubMed] [Google Scholar]
  • Ahissar M, & Hochstein S (2004). The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8(10), 457–464. [PubMed] [Google Scholar]
  • Amso D, & Scerif G (2015). The attentive brain: Insights from developmental cognitive neuroscience. Nature Reviews. Neuroscience, 16(10), 606–619. [PMC free article] [PubMed] [Google Scholar]
  • Arterberry ME, & Kellman PJ (2016). Development of perception in infancy: The cradle of knowledge revisited. New York: Oxford University Press. [Google Scholar]
  • Awh E, Belopolsky AV, & Theeuwes J (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16(8), 437–443. [PMC free article] [PubMed] [Google Scholar]
  • Ball G, Aljabar P, Zebari S, Tusor N, Arichi T, Merchant N, … Counsell SJ (2014). Rich-club organization of the newborn human brain. Proceedings of the National Academy of Sciences of the United States of America, 111(20), 7456–7461. [PMC free article] [PubMed] [Google Scholar]
  • Bar M (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience, 15(4), 600–609. [PubMed] [Google Scholar]
  • Bar M, Tootell RBH, Schacter DL, Greve DN, Fischl B, Mendola JD, … Dale AM (2001). Cortical mechanisms specific to explicit visual object recognition. Neuron, 29(2), 529–535. [PubMed] [Google Scholar]
  • Basirat A, Dehaene S, & Dehaene-Lambertz G (2014). A hierarchy of cortical responses to sequence violations in three-month-old infants. Cognition, 132(2), 137–150. [PubMed] [Google Scholar]
  • Bergelson E, & Swingley D (2012). At 6–9 months, human infants know the meanings of many common nouns. Proceedings of the National Academy of Sciences of the United States of America, 109(9), 3253–3258. [PMC free article] [PubMed] [Google Scholar]
  • Bortfeld H, Morgan JL, Golinkoff RM, & Rathbun K (2005). Mommy and Me: Familiar names help launch babies into speech-stream segmentation. Psychological Science, 16(4), 298–304. [PMC free article] [PubMed] [Google Scholar]
  • Bosworth RG, & Dobkins KR (2009). Chromatic and luminance contrast sensitivity in fullterm and preterm infants. Journal of Vision, 9(13), 1–16. [PMC free article] [PubMed] [Google Scholar]
  • Clark A (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences, 36(3), 181–204. [PubMed] [Google Scholar]
  • Conklin HM, Luciana M, Hooper CJ, & Yarger RS (2007). Working memory performance in typically developing children and adolescents: Behavioral evidence of protracted frontal lobe development. Developmental Neuropsychology, 31(1), 103–128. [PubMed] [Google Scholar]
  • den Ouden HEM, Daunizeau J, Roiser J, Friston KJ, & Stephan KE (2010). Striatal prediction error modulates cortical coupling. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 30(9), 3210–3219. [PMC free article] [PubMed] [Google Scholar]
  • Dobkins KR, & Mccleery JP (2009). Effects of gestational length, gender, postnatal age, and birth order on visual contrast sensitivity in infants. Journal of Vision, 9(10), 1–21. [PMC free article] [PubMed] [Google Scholar]
  • Doria V, Beckmann CF, Arichi T, Merchant N, Groppo M, Turkheimer FE, … Edwards AD (2010). Emergence of resting state networks in the preterm human brain. Proceedings of the National Academy of Sciences of the United States of America, 107(46), 20015–20020. [PMC free article] [PubMed] [Google Scholar]
  • Emberson LL, & Amso D (2012). Learning to sample: Eye tracking and fMRI indices of changes in object perception. Journal of Cognitive Neuroscience, 24(10), 2030–2042. [PubMed] [Google Scholar]
  • Emberson LL, Cannon G, Palmeri H, Richards JE, & Aslin RN (in press). Using fNIRS to examine occipital and temporal responses to stimulus repetition in young infants: Evidence of selective frontal cortex involvement. Developmental Cognitive Neuroscience. 10.1016/j.dcn.2016.11.002. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Emberson LL, Richards JE, & Aslin RN (2015). Top-down modulation in the infant brain: Learning-induced expectations rapidly affect the sensory cortex at 6-months. Proceedings of the National Academy of Sciences of the United States of America, 112(31), 9585–9590. [PMC free article] [PubMed] [Google Scholar]
  • Fair DA, Cohen AL, Power JD, Dosenbach NUF, Church JA, Miezin FM, … Petersen SE (2009). Functional brain networks develop from a "local to distributed" organization. PLoS Computational Biology, 5(5), 14–23. [PMC free article] [PubMed] [Google Scholar]
  • Feldman NH, Myers EB, White KS, Griffiths TL, & Morgan JL (2013). Word-level information influences phonetic learning in adults and infants. Cognition, 127(3), 427–438. [PMC free article] [PubMed] [Google Scholar]
  • Firestone C, & Scholl B (2016). Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, 39, 1–77. 10.1017/S0140525X15000965. [PubMed] [CrossRef] [Google Scholar]
  • Fransson P, Aden U, Blennow M, & Lagercrantz H (2011). The functional architecture of the infant brain as revealed by resting-state fMRI. Cerebral Cortex (New York, N.Y.: 1991), 21(1), 145–154. [PubMed] [Google Scholar]
  • Fransson P, Skiöld B, Horsch S, Nordell A, Blennow M, Lagercrantz H, & Aden U (2007). Resting-state networks in the infant brain. Proceedings of the National Academy of Sciences of the United States of America, 104(39), 15531–15536. [PMC free article] [PubMed] [Google Scholar]
  • Friston K (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 360(1456), 815–836. [PMC free article] [PubMed] [Google Scholar]
  • Gao W, Gilmore JH, Giovanello KS, Smith JK, Shen D, Zhu H, & Lin W (2011). Temporal and spatial evolution of brain network topology during the first two years of life. PLoS One, 6(9), e25278. [PMC free article] [PubMed] [Google Scholar]
  • Gilbert CD, & Li W (2013). Top-down influences on visual processing. Nature Reviews. Neuroscience, 14, 350–363. [PMC free article] [PubMed] [Google Scholar]
  • Gogtay N, Giedd JN, Lusk L, Hayashi KM, Greenstein D, Vaituzis AC, … Thompson PM (2004). Dynamic mapping of human cortical development during childhood through early adulthood. Proceedings of the National Academy of Sciences of the United States of America, 101(21), 8174–8179. [PMC free article] [PubMed] [Google Scholar]
  • Goldstone RL (2000). A neural network model of concept-influenced segmentation. In Proceedings of the twenty-second annual conference of the cognitive science society (pp. 172–177). [Google Scholar]
  • Gómez RL (2002). Variability and detection of invariant structure. Psychological Science, 13(5), 431–436. [PubMed] [Google Scholar]
  • Graf Estes K, & Lew-Williams C (2015). Listening through voices: Infant statistical word segmentation across multiple speakers. Developmental Psychology, 51(11), 1–12. [PMC free article] [PubMed] [Google Scholar]
  • Grossmann T (2013). Mapping prefrontal cortex functions in human infancy. Infancy, 18(3), 303–324. [Google Scholar]
  • Grossmann T, Missana M, Friederici AD, & Ghazanfar AA (2012). Neural correlates of perceptual narrowing in cross-species face-voice matching. Developmental Science, 15(6), 830–839. [PubMed] [Google Scholar]
  • Hadley H, Pickron CB, & Scott LS (2014). The lasting effects of process-specific versus stimulus-specific learning during infancy. Developmental Science, 5, 1–11. [PubMed] [Google Scholar]
  • Hughes HC, Darcey TM, Barkan HI, Williamson PD, Roberts DW, & Aslin CH (2001). Responses of human auditory association cortex to the omission of an expected acoustic event. NeuroImage, 13, 1073–1089. [PubMed] [Google Scholar]
  • Hutchinson JB, & Turk-Browne NB (2012). Memory-guided attention: Control from multiple memory systems. Trends in Cognitive Sciences, 16(12), 576–579. [PMC free article] [PubMed] [Google Scholar]
  • Hwang K, Velanova K, & Luna B (2010). Strengthening of top-down frontal cognitive control networks underlying the development of inhibitory control: A functional magnetic resonance imaging effective connectivity study. Journal of Neuroscience, 30(46), 15535–15545. [PMC free article] [PubMed] [Google Scholar]
  • Kaschube M, Schnabel M, Wolf F, & Löwel S (2009). Interareal coordination of columnar architectures during visual cortical development. Proceedings of the National Academy of Sciences of the United States of America, 106(40), 17205–17210. [PMC free article] [PubMed] [Google Scholar]
  • Kelly DJ, Quinn PC, Slater AM, Lee K, Ge L, & Pascalis O (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089. [PMC free article] [PubMed] [Google Scholar]
  • Kok P, Failing MF, & de Lange FP (2014). Prior expectations evoke stimulus templates in the primary visual cortex. Journal of Cognitive Neuroscience, 26(7), 1546–1554. [PubMed] [Google Scholar]
  • Kouider S, Long B, Stanc LL, Charron S, Fievet AC, Barbosa LS, & Gelskov SV (2015). Neural dynamics of prediction and surprise in infants. Nature Communications, 6, 1–8. [PMC free article] [PubMed] [Google Scholar]
  • Kouider S, Stahlhut C, Gelskov SV, Barbosa LS, Dutat M, de Gardelle V, … Dehaene-Lambertz G (2013). A neural marker of perceptual consciousness in infants. Science, 340(6130), 376–380. [PubMed] [Google Scholar]
  • Latawiec D, Martin KAC, & Meskenaite V (2000). Termination of the geniculocortical projection in the striate cortex of macaque monkey: A quantitative immunoelectron microscopic study. Journal of Comparative Neurology, 419(3), 306–319. [PubMed] [Google Scholar]
  • Lewkowicz DJ, & Ghazanfar AA (2006). The decline of cross-species intersensory perception in human infants. Proceedings of the National Academy of Sciences of the United States of America, 103(17), 6771–6774. [PMC free article] [PubMed] [Google Scholar]
  • Li W, Piëch V, & Gilbert CD (2004). Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience, 7(6), 651–657. [PMC free article] [PubMed] [Google Scholar]
  • Lupyan G (2015). Cognitive penetrability of perception in the age of prediction: Predictive systems are penetrable systems. Review of Philosophy and Psychology, 6(4), 547–569. [Google Scholar]
  • Lupyan G, & Spivey MJ (2008). Perceptual processing is facilitated by ascribing meaning to novel stimuli. Current Biology, 18(10), R410–R412. [PubMed] [Google Scholar]
  • Lupyan G, & Ward EJ (2013). Language can boost otherwise unseen objects into visual awareness. Proceedings of the National Academy of Sciences of the United States of America, 110(35), 14196–14201. [PMC free article] [PubMed] [Google Scholar]
  • Mandel DR, Jusczyk PW, & Pisoni DB (1995). Infants' recognition of the sound patterns of their own names. Psychological Science, 6(5), 314–317. [PMC free article] [PubMed] [Google Scholar]
  • Markant J, Oakes LM, & Amso D (2016). Visual selective attention biases contribute to the other-race effect among 9-month-old infants. Developmental Psychobiology, 58(3), 355–365. [PMC free article] [PubMed] [Google Scholar]
  • Maurer D, Lewis TL, & Mondloch CJ (2005). Missing sights: Consequences for visual cognitive development. Trends in Cognitive Sciences, 9(3), 144–151. [PubMed] [Google Scholar]
  • Maurer D, & Werker JF (2014). Perceptual narrowing during infancy: A comparison of language and faces. Developmental Psychobiology, 56(2), 154–178. [PubMed] [Google Scholar]
  • Nakano T, Homae F, Watanabe H, & Taga G (2008). Anticipatory cortical activation precedes auditory events in sleeping infants. PLoS One, 3(12), e3912. [PMC free article] [PubMed] [Google Scholar]
  • Nakano T, Watanabe H, Homae F, & Taga G (2009). Prefrontal cortical involvement in young infants' analysis of novelty. Cerebral Cortex, 19(2), 455–463. [PubMed] [Google Scholar]
  • Needham A, & Baillargeon R (1998). Effects of prior experience on 4.5-month-old infants' object segregation. Infant Behavior & Development, 21(1), 1–24. [Google Scholar]
  • Needham A, Cantlon JF, & Ormsbee Holley SM (2006). Infants' use of category knowledge and object attributes when segregating objects at 8.5 months of age. Cognitive Psychology, 53(4), 345–360. [PubMed] [Google Scholar]
  • Needham A, Dueker G, & Lockhead G (2005). Infants' formation and use of categories to segregate objects. Cognition, 94(3), 215–240. [PubMed] [Google Scholar]
  • Needham A, & Modi A (1999). Infants' use of prior experiences with objects in object segregation: Implications for object recognition in infancy. Advances in Child Development and Behavior, 27, 99–133. [PubMed] [Google Scholar]
  • Neisser U (1976). Cognition and reality: Principles and implications of cognitive psychology. New York: WH Freeman & Co. [Google Scholar]
  • Nelson CA, Ellis AE, Collins PF, & Lang SF (1990). Infants' neuroelectric responses to missing stimuli: Can missing stimuli be novel stimuli? Developmental Neuropsychology, 6(4), 339–349. [Google Scholar]
  • Neville H, & Bavelier D (2002). Human brain plasticity: Evidence from sensory deprivation and altered language experience. Progress in Brain Research, 138, 177–188. [PubMed] [Google Scholar]
  • Panichello MF, Cheung OS, & Bar M (2013). Predictive feedback and conscious visual experience. Frontiers in Psychology, 3, 620. [PMC free article] [PubMed] [Google Scholar]
  • Pascalis O, Scott LS, Kelly DJ, Shannon RW, Nicholson E, Coleman M, & Nelson CA (2005). Plasticity of face processing in infancy. Proceedings of the National Academy of Sciences of the United States of America, 102(14), 5297–5300. [PMC free article] [PubMed] [Google Scholar]
  • Peña M, Pittaluga E, & Mehler J (2010). Language acquisition in premature and full-term infants. Proceedings of the National Academy of Sciences of the United States of America, 107(8), 3823–3828. [PMC free article] [PubMed] [Google Scholar]
  • Polley DB, Steinberg EE, & Merzenich MM (2006). Perceptual learning directs auditory cortical map reorganization through top-down influences. Journal of Neuroscience, 26(18), 4970–4982. [PMC free article] [PubMed] [Google Scholar]
  • Price DJ, Kennedy H, Dehay C, Zhou L, Mercier M, Jossin Y, … Molnár Z (2006). The development of cortical connections. European Journal of Neuroscience, 23(4), 910–920. [PubMed] [Google Scholar]
  • Quinn PC, & Bhatt RS (2005). Learning perceptual organization in infancy. Psychological Science, 16(7), 511–515. [PubMed] [Google Scholar]
  • Quinn PC, Brown CR, & Streppa ML (1997). Perceptual organization of complex visual configurations by young infants. Infant Behavior & Development, 20(1), 35–46. [Google Scholar]
  • Quinn PC, & Schyns PG (2003). What goes up may come down: Perceptual process and knowledge access in the organization of complex visual patterns by young infants. Cognitive Science, 27(6), 923–935. [Google Scholar]
  • Quinn PC, Schyns PG, & Goldstone RL (2006). The interplay between perceptual organization and categorization in the representation of complex visual patterns by young infants. Journal of Experimental Child Psychology, 95(2), 117–127. [PubMed] [Google Scholar]
  • Rao RPN, & Ballard DH (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. [PubMed] [Google Scholar]
  • Roelfsema PR, van Ooyen A, & Watanabe T (2010). Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14(2), 64–71. [PMC free article] [PubMed] [Google Scholar]
  • Sasai S, Homae F, Watanabe H, Sasaki AT, Tanabe HC, Sadato N, & Taga G (2012). A NIRS-fMRI study of resting state network. NeuroImage, 63(1), 179–193. [PubMed] [Google Scholar]
  • Scott LS, & Monesson A (2009). The origin of biases in face perception. Psychological Science, 20(6), 676–680. [PubMed] [Google Scholar]
  • Scott LS, Pascalis O, & Nelson CA (2007). A domain-general theory of the development of perceptual discrimination. Current Directions in Psychological Science, 16(4), 197–201. [PMC free article] [PubMed] [Google Scholar]
  • Seitz AR, Nanez JE, Holloway S, Tsushima Y, & Watanabe T (2006). Two cases requiring external reinforcement in perceptual learning. Journal of Vision, 6(9), 966–973. [PubMed] [Google Scholar]
  • Seitz A, & Watanabe T (2005). A unified model for perceptual learning. Trends in Cognitive Sciences, 9(7), 329–334. [PubMed] [Google Scholar]
  • Smyser CD, Inder TE, Shimony JS, Hill JE, Degnan AJ, Snyder AZ, & Neil JJ (2010). Longitudinal analysis of neural network development in preterm infants. Cerebral Cortex (New York, N.Y.: 1991), 20(12), 2852–2862. [PMC free article] [PubMed] [Google Scholar]
  • Stager CL, & Werker JF (1997). Infants listen for more phonetic detail in speech perception than in word-learning tasks. Nature, 388(6640), 381–382. [PubMed] [Google Scholar]
  • Stokes M, Atherton K, Patai EZ, & Nobre AC (2012). Long-term memory prepares neural activity for perception. Proceedings of the National Academy of Sciences of the United States of America, 109(6), E360–E367. [PMC free article] [PubMed] [Google Scholar]
  • Stuss DT, & Knight RT (Eds.). (2013). Principles of frontal lobe function (2nd ed.). New York: Oxford University Press. [Google Scholar]
  • Summerfield C, & de Lange FP (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews. Neuroscience, 15, 745–756. [PubMed] [Google Scholar]
  • Summerfield C, & Egner T (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409. [PubMed] [Google Scholar]
  • Summerfield C, Egner T, Greene M, Koechlin E, Mangels J, & Hirsch J (2006). Predictive codes for forthcoming perception in the frontal cortex. Science, 314, 1311–1314. [PubMed] [Google Scholar]
  • Summerfield C, & Koechlin E (2008). A neural representation of prior information during perceptual inference. Neuron, 59(2), 336–347. [PubMed] [Google Scholar]
  • Summerfield C, Trittschuh EH, Monti JM, Mesulam MM, & Egner T (2008). Neural repetition suppression reflects fulfilled perceptual expectations. Nature Neuroscience, 11(9), 1004–1006. [PMC free article] [PubMed] [Google Scholar]
  • Swingley D (2008). The roots of the early vocabulary in infants' learning from speech. Current Directions in Psychological Science, 17(5), 308–312. [PMC free article] [PubMed] [Google Scholar]
  • Thiessen ED (2007). The effect of distributional information on children's use of phonemic contrasts. Journal of Memory and Language, 56(1), 16–34. [Google Scholar]
  • Tymofiyeva O, Hess CP, Ziv E, Lee PN, Glass HC, Ferriero DM, … Xu D (2013). A DTI-based template-free cortical connectome study of brain maturation. PLoS One, 8(5), 1–10. [PMC free article] [PubMed] [Google Scholar]
  • Wacongne C, Labyt E, van Wassenhove V, Bekinschtein T, Naccache L, & Dehaene S (2011). Evidence for a hierarchy of predictions and prediction errors in human cortex. Proceedings of the National Academy of Sciences of the United States of America, 108(51), 20754–20759. [PMC free article] [PubMed] [Google Scholar]
  • Werker JF, & Tees RC (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior & Development, 7(1), 49–63. [Google Scholar]
  • Wiesel TN, & Hubel DH (1963). Single-cell responses in striate cortex of kittens deprived of vision in one eye. Journal of Neurophysiology, 26, 1003–1017. [PubMed] [Google Scholar]
  • Wu R, Nako R, Band J, Pizzuto J, Ghoreishi Y, Scerif G, & Aslin RN (2015). Rapid attentional selection of non-native stimuli despite perceptual narrowing. Journal of Cognitive Neuroscience, 27(11), 2299–2307. [PMC free article] [PubMed] [Google Scholar]
  • Yamins DLK, Hong H, Cadieu CF, Solomon EA, Seibert D, & DiCarlo JJ (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8619–8624. [PMC free article] [PubMed] [Google Scholar]
  • Yap PT, Fan Y, Chen Y, Gilmore JH, Lin W, & Shen D (2011). Development trends of white matter connectivity in the first years of life. PLoS One, 6(9), e24678. [PMC free article] [PubMed] [Google Scholar]
  • Yeung HH, & Werker JF (2009). Learning words' sounds before learning how words sound: 9-Month-olds use distinct objects as cues to categorize speech information. Cognition, 113(2), 234–243. [PubMed] [Google Scholar]
  • Zevin JD, Yang J, Skipper JI, & McCandliss BD (2010). Domain general change detection accounts for "dishabituation" effects in temporal-parietal regions in functional magnetic resonance imaging studies of speech perception. Journal of Neuroscience, 30(3), 1110–1117. [PMC free article] [PubMed] [Google Scholar]
