Late-January New Content Digest | Geo-Intelligence, Hands-On Functions, and a New Year Kickoff

The Lunar New Year is almost here, but the learning never stops! The late-January updates focus on geo-intelligence, hands-on functions, secondary development, and New Year activities, helping you keep making breakthroughs in data exploration!

1. Featured Chart Applications

[Maps] "GIS Maps: Say Goodbye to Flat Reports and Unlock Your Business's 'Geo-Intelligence'" → Learn how to apply GIS maps and deeply integrate business data with geographic information.
[Scatter Plots] "The 'Relationship Detective' of the Business World" → Master hands-on uses of scatter plots in business analysis and uncover hidden relationships between variables.

2. Secondary Development Video Update

"Extending the Excel Import Template with a Data Processing Class" → How to make an imported "1" or "0" automatically become "是" (yes) or "否" (no); see the sketch below for the core idea.
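As a rough illustration only (this is not Smartbi's actual extension API; the class name and hook signature below are assumptions, so follow the video for the real interface), the heart of such a data-processing class is a per-cell value mapping:

// Hypothetical sketch of the value-mapping idea behind the extension class.
// The transform() hook name and signature are assumptions, not Smartbi's API.
public class YesNoCellMapper {

    // Called for each imported cell value; unmapped values pass through.
    public String transform(String rawValue) {
        if ("1".equals(rawValue)) return "是";
        if ("0".equals(rawValue)) return "否";
        return rawValue;
    }

    public static void main(String[] args) {
        YesNoCellMapper mapper = new YesNoCellMapper();
        System.out.println(mapper.transform("1")); // prints 是
        System.out.println(mapper.transform("0")); // prints 否
    }
}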

3. Advanced Function Applications

[Function Classroom] "Fixed: The 'Anchor' of Data Calculations" → A systematic walkthrough of the Fixed function's use cases and techniques, helping you master the key to stable, reliable calculations.

4. Plugin Update

"Offline Export Integrated with Alibaba Cloud OSS" → Offline export can now target Alibaba Cloud OSS, improving export security and storage flexibility.

5. New Year Activities Underway

"New Year Round ② | Spring Festival Knowledge Arena: Decode with Wisdom, Ring in the New Year!" → A Spring Festival special: take on the wisdom-decoding challenge and welcome New Year luck!

6. Ongoing Task Releases

[Chart Applications] "Diagnose Market Profit and Loss with GIS Maps, Craft Precise Strategies" → Master GIS map analysis to accurately diagnose market performance and support strategy making.
[Functions] "Fixed Function Hands-On Task" → Dive into practical uses of the Fixed function to improve the stability and precision of your calculations.
[Chart Applications] "Scatter Plots: Your 'Ad Effectiveness Investigation Bureau' Is Now Open!" → Use scatter plots to analyze advertising effectiveness and become a data-driven "investigator".
[New Year Activities] "Wisdom-Decoding Arena: Challenge Your Data Brainpower!" → Join the data-decoding challenge and sharpen your logical thinking and analytical skills.


Geo-intelligence empowers the business, hands-on functions strengthen the fundamentals, and the New Year activities blend wit with fun. This late January, set off on the new year's journey together with your data!


How do I use the LEN function?

数据分析 · Posted on 2023-04-23 11:56:42

Is the expression LEN([字段名]) correct? It raises an error when I use it.


Log:


2023-04-23 11:51:02.152 [993] INFO launcher.DefaultLauncher.run:59 - Task start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.163 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:RUNNING)
2023-04-23 11:51:02.170 [993] WARN sql.SparkSession$Builder.logWarning:69 - Using an existing SparkSession; some spark core configurations may not take effect.
2023-04-23 11:51:02.170 [993] INFO node.GenericNode.start:107 - Node start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.203 [993] INFO datasources.InMemoryFileIndex.logInfo:57 - It took 3 ms to list leaf files for 1 paths.
2023-04-23 11:51:02.283 [993] INFO spark.SparkContext.logInfo:57 - Starting job: parquet at DatasetEvent.java:229
2023-04-23 11:51:02.325 [993] INFO scheduler.DAGScheduler.logInfo:57 - Job 77 finished: parquet at DatasetEvent.java:229, took 0.039619 s
2023-04-23 11:51:02.332 [993] INFO util.EventSerializeUtil.deserialize:106 - Deserialization event finished,took 0.162 s
2023-04-23 11:51:02.432 [993] ERROR node.GenericNode.handleExecuteError:148 - Node execution failed.(id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
org.apache.spark.sql.AnalysisException: Undefined function: 'len'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 11
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.$anonfun$applyOrElse$121(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map$(TraversableLike.scala:231) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.map(List.scala:298) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:85) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:153) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:223) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:106) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions$(AnalysisHelper.scala:151) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveExpressions(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2096) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:216) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.IndexedSeqOptimized.foldLeft(IndexedSeqOptimized.scala:60) ~[scala-library-2.12.10.jar:?]
        at scala.collection.IndexedSeqOptimized.foldLeft$(IndexedSeqOptimized.scala:68) ~[scala-library-2.12.10.jar:?]
        at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:38) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:213) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:198) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:192) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:155) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:176) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:230) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:175) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at smartbix.datamining.engine.execute.node.preprocess.DeriveColumnNode.execute(DeriveColumnNode.java:67) ~[EngineCommonNode-1.0.jar:?]
        at smartbix.datamining.engine.execute.node.GenericNode.start(GenericNode.java:117) ~[EngineCore-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.executor.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:43) ~[EngineAgent-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.launcher.DefaultLauncher.run(DefaultLauncher.java:67) ~[EngineAgent-1.0.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202-ea]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202-ea]
        at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_202-ea]
2023-04-23 11:51:02.449 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:FAIL)
Posted on 2023-04-23 13:37:55

In what scenario are you using the len function? From the error, the underlying engine does not support len: the stack trace shows Spark SQL 3.1 raising "Undefined function: 'len'" while analyzing the derived-column expression.
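For reference, Spark 3.1 ships a built-in length() function but does not register len, so rewriting the derived-column formula as length([字段名]) should clear the AnalysisException. A minimal sketch against a local SparkSession (the literal value here is made up for illustration):

import org.apache.spark.sql.SparkSession;

public class LengthFunctionDemo {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("length-demo")
                .getOrCreate();

        // length() is built into Spark SQL; len is undefined in Spark 3.1
        // and triggers the AnalysisException shown in the log above.
        spark.sql("SELECT length('smartbi') AS n").show();  // n = 7

        spark.stop();
    }
}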
