Early January Content Roundup | Data Governance, Chart Applications, and Getting Started with Functions

A new year begins, and the learning continues! The early-January updates focus on data security, hands-on charting, function fundamentals, and deeper scenario practice, helping you start the year strong on your data intelligence journey!

1. Technical Experience Sharing

Fine-grained control over data exports: leave sensitive data nowhere to leak! → Strengthen data security management, effectively prevent leaks of sensitive information, and improve enterprise data compliance.

2. Newly Released Tasks

[Chart Applications] Scatter Plots: Precise Insight into Distributions → Learn to build and analyze scatter plots and master techniques for reading data distributions.
[Functions] Novice Village Trial: An Introductory Challenge on Calculated Measures → Hands-on function basics to help you conquer the fundamentals of calculated measures.
[Chart Applications] Heat Maps: A Strategic Eye on Market Concentration → Master heat-map creation and visually identify how market activity is distributed.
[BI Knowledge Challenge] Fine-Grained Control over Data Exports: Leave Sensitive Data Nowhere to Leak! → Reinforce data security knowledge and sharpen hands-on governance skills.
[New Year Event] New Year Shopping Data Detective → A New Year-themed exercise in data filtering and analysis.

3. Featured Scenario Applications

Value Engine: A Themed Course on Financial Analysis for Automotive Manufacturing → Continuing hands-on financial data analysis to support business decisions and value discovery.
[Maps] Scatter Maps: Precise Placement, Clear Distributions → Learn to apply scatter maps and visualize location-based data.
[Maps] Heat Maps: Spot Business "Highs" and "Lows" at a Glance → Master the practical use of heat maps in business analysis.

4. Secondary Development Video Updates

Excel Import Template Extended Validation Classes → A deep dive into extended validation techniques for the Excel import feature, improving the accuracy and consistency of imported data.

5. Ongoing Activities

New Year Special No. ① | New Year Shopping Data Challenge: How Many Questions Can You Get Right? → A fun data challenge to test your analysis skills and win New Year prizes.

6. Official Announcements

The 2025 Annual Task Leaderboard Revealed! → A look back at the year's learning achievements, unveiling the task-completion leaderboard to encourage continued learning.

7. Getting Started with Functions

[Function Classroom] Function Overview: Say Goodbye to "I Can't Use Calculated Measures" Anxiety → A systematic walkthrough of function usage to help you get started with calculated measures and clear up the confusion.


How do I use the LEN function?

数据分析 · Posted on 2023-4-23 11:56:42

Is the expression LEN([字段名]) written correctly? I get an error when I run it.


Log:


2023-04-23 11:51:02.152 [993] INFO launcher.DefaultLauncher.run:59 - Task start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.163 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:RUNNING)
2023-04-23 11:51:02.170 [993] WARN sql.SparkSession$Builder.logWarning:69 - Using an existing SparkSession; some spark core configurations may not take effect.
2023-04-23 11:51:02.170 [993] INFO node.GenericNode.start:107 - Node start. (id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
2023-04-23 11:51:02.203 [993] INFO datasources.InMemoryFileIndex.logInfo:57 - It took 3 ms to list leaf files for 1 paths.
2023-04-23 11:51:02.283 [993] INFO spark.SparkContext.logInfo:57 - Starting job: parquet at DatasetEvent.java:229
2023-04-23 11:51:02.325 [993] INFO scheduler.DAGScheduler.logInfo:57 - Job 77 finished: parquet at DatasetEvent.java:229, took 0.039619 s
2023-04-23 11:51:02.332 [993] INFO util.EventSerializeUtil.deserialize:106 - Deserialization event finished,took 0.162 s
2023-04-23 11:51:02.432 [993] ERROR node.GenericNode.handleExecuteError:148 - Node execution failed.(id:76fac8d9c16e6859656a643cd0db203e,name:DERIVE_COLUMN)
org.apache.spark.sql.AnalysisException: Undefined function: 'len'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 11
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.$anonfun$applyOrElse$121(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:318) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:408) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:406) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:359) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:323) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map(TraversableLike.scala:238) ~[scala-library-2.12.10.jar:?]
        at scala.collection.TraversableLike.map$(TraversableLike.scala:231) ~[scala-library-2.12.10.jar:?]
        at scala.collection.immutable.List.map(List.scala:298) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:132) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:244) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:85) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:153) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:74) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:110) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:223) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:108) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:106) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions(AnalysisHelper.scala:152) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveExpressions$(AnalysisHelper.scala:151) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveExpressions(LogicalPlan.scala:29) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2099) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:2096) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:216) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.IndexedSeqOptimized.foldLeft(IndexedSeqOptimized.scala:60) ~[scala-library-2.12.10.jar:?]
        at scala.collection.IndexedSeqOptimized.foldLeft$(IndexedSeqOptimized.scala:68) ~[scala-library-2.12.10.jar:?]
        at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:38) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:213) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at scala.collection.immutable.List.foreach(List.scala:392) ~[scala-library-2.12.10.jar:?]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:205) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:198) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:192) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:155) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:183) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:176) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:230) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:175) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111) ~[spark-catalyst_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:98) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:618) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:613) ~[spark-sql_2.12-3.1.3.jar:3.1.3]
        at smartbix.datamining.engine.execute.node.preprocess.DeriveColumnNode.execute(DeriveColumnNode.java:67) ~[EngineCommonNode-1.0.jar:?]
        at smartbix.datamining.engine.execute.node.GenericNode.start(GenericNode.java:117) ~[EngineCore-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.executor.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:43) ~[EngineAgent-1.0.jar:?]
        at smartbix.datamining.engine.agent.execute.launcher.DefaultLauncher.run(DefaultLauncher.java:67) ~[EngineAgent-1.0.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_202-ea]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_202-ea]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_202-ea]
        at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_202-ea]
2023-04-23 11:51:02.449 [993] INFO repository.NodeStatusRepository.executeUpdate:138 - Report status successful.(state:FAIL)
Posted on 2023-4-23 13:37:55
In which scenario are you using the len function? Judging from the error, the underlying database/compute engine does not support len: the Spark analyzer in the log reports "Undefined function: 'len'".
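
For reference, a minimal sketch (hypothetical table and column names) of the engine-level behavior, assuming the derived-column expression is passed through to Spark SQL as the stack trace suggests: on Spark 3.1.x the built-in string-length function is length, while len is not registered, so LENGTH([字段名]) may work where LEN([字段名]) fails.

import org.apache.spark.sql.SparkSession

object LenVsLength {
  def main(args: Array[String]): Unit = {
    // Local session mirroring the Spark 3.1.3 runtime seen in the log above.
    val spark = SparkSession.builder()
      .appName("len-vs-length")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical one-column table standing in for the real dataset.
    Seq("abc", "hello").toDF("field").createOrReplaceTempView("t")

    // Fails on Spark 3.1.x with AnalysisException: Undefined function: 'len',
    // matching the stack trace in the log:
    // spark.sql("SELECT len(field) FROM t").show()

    // length is the registered built-in string-length function and works:
    spark.sql("SELECT field, length(field) AS n FROM t").show()

    spark.stop()
  }
}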
