Exam Questions
Position: Analista Judiciário - Área Judiciária
Year: 2010
Attention: Questions 1 through 10 refer to the following text.
On the natural and the supernatural
The other day I wrote about the importance of not knowing, about how knowledge advances when it starts from not knowing, that is, from the sense of mystery that exists beyond what is known.
The question here is one of attitude, of how to face the unknown. There are two alternatives: either one believes in the capacity of human reason and intuition (properly combined) to overcome obstacles and arrive at new knowledge, or one believes that there are inscrutable mysteries, created by forces beyond the relations of cause and effect.
In my book Criação imperfeita, I argued that science will never be able to answer every question. There will always be new challenges, questions that our research and inventiveness cannot anticipate. We can picture the known as the region inside a circle and the unknown as what lies outside the circle. There is no doubt that, as science advances, the circle grows. We understand more about the universe and more about the mind. Even so, the outside of the circle will always remain. Science cannot obtain knowledge about everything that exists in the world. And why is that? Because, in practice, we learn about the world using our intuition and our instruments. Without telescopes, microscopes and particle detectors, our view of the world would be far more limited. Yet, like our eyes, these machines have limits.
Paraphrasing the Roman poet Lucretius, people live terrified by what they cannot explain. To be free is to be able to reflect on the causes of phenomena without blindly accepting "inexplicable explanations", that is, explanations based on causes beyond the natural.
It is not easy to remain coherent when something strange happens: an incredible coincidence, the death of a loved one, a premonition, something out of the ordinary. But, as the great physicist Richard Feynman used to say, "I would rather not know than be deceived." And you?
(Adapted from Marcelo Gleiser, Folha de S. Paulo, 11/07/2010)
Position: Técnico Judiciário - Tecnologia da Informação
Year: 2010
Attention: Questions 53 through 60 refer to the text below.
Data mining
From Wikipedia, the free encyclopedia
Not to be confused with information extraction.
Data mining is the process of extracting patterns from data. Data mining is seen as an increasingly important tool by modern business to transform data into an informational advantage. It is currently used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery.
The related terms data dredging, data fishing and data snooping refer to the use of data mining techniques on sample portions of the larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered (see also data-snooping bias). These techniques can, however, be used in the creation of new hypotheses to test against the larger data populations.
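The data-snooping bias described above can be made concrete with a small sketch (not part of the exam text; all values are illustrative): if we test enough pure-noise "features" against a small sample, some feature will look strongly correlated with the outcome by chance alone.

```python
import random

random.seed(0)
n = 20                                     # deliberately small sample
outcome = [random.random() for _ in range(n)]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Dredge 200 random, unrelated candidate features for "patterns".
best = max(
    abs(corr([random.random() for _ in range(n)], outcome))
    for _ in range(200)
)
# With 200 tries on only 20 points, some pure-noise feature will
# appear strongly correlated -- a pattern that would not survive
# testing against the larger population, as the passage notes.
print(round(best, 2))
```

The spurious correlation found here is exactly why the passage says such discoveries should be treated as new hypotheses to test against the larger data population, not as validated patterns.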
Background
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have increased data collection and storage. As data sets have grown in size and complexity, direct hands-on data analysis has increasingly been augmented with indirect, automatic data processing. This has been aided by other discoveries in computer science, such as neural networks, clustering, genetic algorithms (1950s), decision trees (1960s) and support vector machines (1980s). Data mining is the process of applying these methods to data with the intention of uncovering hidden patterns. It has been used for many years by businesses, scientists and governments to sift through volumes of data such as airline passenger trip records, census data and supermarket scanner data to produce market research reports. (Note, however, that reporting is not always considered to be data mining.)
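A minimal sketch of one method named above, clustering, applied to toy one-dimensional "supermarket basket size" data (the function and the values are illustrative, not from the exam text): the algorithm uncovers a hidden grouping of shoppers without being told it exists.

```python
def kmeans_1d(points, k=2, iters=10):
    """Very small k-means on 1-D data; returns sorted cluster centers."""
    centers = [points[0], points[-1]]          # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest current center.
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) for g in groups if g]
    return sorted(centers)

baskets = [2, 3, 3, 4, 18, 20, 21, 22]   # two hidden shopper groups
print(kmeans_1d(baskets))                # -> [3.0, 20.25]
```

The two recovered centers (small-basket vs. large-basket shoppers) are the kind of "hidden pattern" the passage describes mining from supermarket scanner data.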
A primary reason for using data mining is to assist in the analysis of collections of observations of behaviour. Such data are vulnerable to collinearity because of unknown interrelations. An unavoidable fact of data mining is that the (sub-)set(s) of data being analysed may not be representative of the whole domain, and [CONJUNCTION] may not contain examples of certain critical relationships and behaviours that exist across other parts of the domain. To address this sort of issue, the analysis may be augmented using experiment-based and other approaches, such as Choice Modelling for human-generated data. In these situations, inherent correlations can be either controlled for, or removed altogether, during the construction of the experimental design.